Disastrous_Elk_6375 t1_jc5pny8 wrote
Reply to comment by phire in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
> TBH, I'm pretty impressed for a 7B parameter model.
Same here. I've tried a bunch of prompts from a repo, and the "follow the instruction" part seems pretty good and consistent. The overall quality of the output is of course below ChatGPT's, but considering that we're comparing 7B vs 175B parameters, this is pretty good!