[R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003
Submitted by dojoteef on March 13, 2023 at 5:10 PM in MachineLearning (126 comments)
mattrobs wrote on March 19, 2023 at 3:12 AM, replying to v_krishna:

Have you tried GPT-4? It's been quite resilient in my testing.