blueSGL t1_jcjgsl1 wrote
Reply to comment by Necessary_Ad_9800 in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Exactly.
I'm just eager to see what fine-tunes are going to be made on LLaMA now, and how model merging affects them. The combination of those two techniques has led to some crazy advancements in the Stable Diffusion world. No idea if merging will work with LLMs as it does for diffusion models. (has anyone even tried yet?)
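For context, the merging popular in the Stable Diffusion community is usually just parameter-wise linear interpolation between two checkpoints with the same architecture. A minimal sketch of that idea, using plain Python dicts of floats as stand-ins for real model state dicts (the checkpoint names and values here are made up for illustration):

```python
def merge_weights(weights_a, weights_b, alpha=0.5):
    """Parameter-wise linear interpolation of two checkpoints:
    merged = (1 - alpha) * a + alpha * b.

    This mirrors the simple weighted-merge used by Stable Diffusion
    checkpoint mergers; whether it transfers to LLM weights is an
    open question. Both checkpoints must share identical shapes.
    """
    merged = {}
    for name, params_a in weights_a.items():
        params_b = weights_b[name]
        merged[name] = [(1 - alpha) * a + alpha * b
                        for a, b in zip(params_a, params_b)]
    return merged


# Toy checkpoints standing in for two fine-tuned models.
ckpt_a = {"layer0.weight": [1.0, 2.0], "layer0.bias": [0.0, 0.0]}
ckpt_b = {"layer0.weight": [3.0, 4.0], "layer0.bias": [1.0, 1.0]}

merged = merge_weights(ckpt_a, ckpt_b, alpha=0.5)
# With alpha=0.5 each merged parameter is the midpoint of the two inputs.
```

With real models you would do the same loop over `state_dict()` tensors instead of lists; the open question is whether the interpolated weights stay coherent for a transformer LLM the way they often do for diffusion U-Nets.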
Necessary_Ad_9800 t1_jcjj8b6 wrote
Interesting. However, I find some merges in SD to be terrible. But I have no doubt the open source community will make something amazing