[R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 Submitted by dojoteef t3_11qfcwb on March 13, 2023 at 5:10 PM in MachineLearning 126 comments 371
Anjz t1_jc66z62 wrote on March 14, 2023 at 10:06 AM This works really well, it feels so much more coherent than the untuned LLaMA. Wish they'd release the model so we can try this on our own devices; really looking forward to that. 2