5 Comments

That's a brilliant idea! I tried using your fine-tuned model, but it says I don't have access to it. I guess you can't share a model just by sharing its ID.

Turns out you are right, sorry about that, but it seems to be a limitation on OpenAI's side:

https://community.openai.com/t/can-you-share-gpt-3-fine-tuned-models/19143/3

I think you have two options if you want to give it a try: 1) replicating what I did, or 2) testing it on Distillery.

I followed your instructions and code and it was easy to replicate, thank you so much for sharing this!

FYI, for some reason the output of the fine-tuned GPT is stitching words together (no space between two words), probably some corrupted training data. I'll investigate later.

I'm very glad that you managed to replicate the process.

I think that is the important first step, and from there you can change and edit things in the way that is optimal for you. That is the whole goal of this blog.

I have noticed those errors too, and it is almost certainly due to the dataset. I have done zero cleaning; I just reused whatever other users typed. My thesis here was that I'd train LoRAs on this data, and this prompt helper would excel at prompting those LoRAs.
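For reference, a minimal sketch of the kind of basic cleaning pass that could catch some of these artifacts before fine-tuning, assuming the training file is JSONL with string fields (the field names in the example are hypothetical, not from the original post):

```python
import json
import re


def clean_record(rec: dict) -> dict:
    # Collapse runs of whitespace and trim each string field;
    # field names are assumptions about the JSONL layout.
    return {k: re.sub(r"\s+", " ", v).strip() if isinstance(v, str) else v
            for k, v in rec.items()}


def clean_jsonl(lines) -> list:
    # Normalize every record and drop exact duplicates.
    seen, out = set(), []
    for line in lines:
        rec = clean_record(json.loads(line))
        key = json.dumps(rec, sort_keys=True)
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out
```

This only normalizes whitespace and deduplicates; it won't split words that were already fused in the source text, but it does stop stray formatting from producing more of them.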

I'm pretty sure you can avoid these typos simply by adding a short sentence to the system prompt, something like "make sure grammar and punctuation are correct".
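A minimal sketch of what that could look like in the OpenAI chat-message format (the helper name and the exact system wording are assumptions, not from the original post):

```python
def build_messages(user_prompt: str) -> list:
    """Prepend a system instruction nudging the model toward clean output.

    Hypothetical helper; the system sentence below is just one example
    of the kind of instruction suggested above.
    """
    system = ("You are a prompt helper. "
              "Make sure grammar, spacing, and punctuation are correct.")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]


# The resulting list would then be passed to the fine-tuned model, e.g.:
# client.chat.completions.create(model="ft:...", messages=build_messages("..."))
```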

What a great article! I appreciate that you shared the process in detail, and LLM Crazy is really crazy. I love it!
