djaym7 · t1_jc32kmz · March 13, 2023 at 6:06 PM · score: −3
Reply to comment by Bulky_Highlight_3352 in "[R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003" by dojoteef
This just sucks
djaym7 · t1_jbpnn87 · March 10, 2023 at 7:25 PM · score: 1
Reply to "[D] Why isn't everyone using RWKV if it's so much better than transformers?" by ThePerson654321
The lack of a paper is the blocker
djaym7 · t1_jatu0sf · March 4, 2023 at 12:47 AM · score: 9
Reply to "Meta's LLaMa weights leaked on torrent... and the best thing about it is someone put up a PR to replace the google form in the repo with it 😂" by RandomForests92
Lmao, this is awesome!