Submitted by sinavski t3_10uh62c in MachineLearning
Cheap_Meeting t1_j7j70tj wrote
Reply to comment by MysteryInc152 in [D] List of Large Language Models to play with. by sinavski
That's not my takeaway. GLM-130B is behind even OPT by mean win rate, and the instruction-tuned version of OPT is in turn worse than FLAN-T5, a 10x smaller model (https://arxiv.org/pdf/2212.12017.pdf, Table 14).
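For anyone unfamiliar with the metric: a mean win rate just counts, over every (task, opponent) pair, how often a model's score beats the other model's. A minimal sketch with made-up scores (the real numbers are in Table 14 of the paper above):

```python
# Hypothetical per-task accuracies, purely for illustration --
# NOT the actual numbers from the OPT-IML paper.
scores = {
    "GLM-130B": [0.42, 0.55, 0.38],
    "OPT-175B": [0.45, 0.53, 0.41],
    "FLAN-T5":  [0.50, 0.60, 0.47],
}

def mean_win_rate(model: str, scores: dict) -> float:
    """Fraction of (task, opponent) comparisons this model wins."""
    wins, total = 0, 0
    for other, other_scores in scores.items():
        if other == model:
            continue
        for mine, theirs in zip(scores[model], other_scores):
            wins += mine > theirs
            total += 1
    return wins / total

for m in scores:
    print(f"{m}: {mean_win_rate(m, scores):.2f}")
```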
MysteryInc152 t1_j7ja39c wrote
I believe the fine-tuning dataset matters as well as the model, but I guess we'll see. I think they plan on fine-tuning it.

The dataset used to tune OPT doesn't contain any chain-of-thought examples.
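For context, "chain of thought" here means fine-tuning targets that spell out intermediate reasoning rather than just the final answer (as in FLAN's CoT mixture). A hypothetical pair of training examples showing the difference:

```python
# Illustrative instruction-tuning examples; the exact format and
# wording are assumptions, not taken from any real dataset.

direct_example = {
    "input": "Q: Roger has 5 balls and buys 2 cans of 3 balls each. "
             "How many balls does he have?",
    "target": "11",  # answer only, no reasoning
}

cot_example = {
    "input": "Q: Roger has 5 balls and buys 2 cans of 3 balls each. "
             "How many balls does he have? Let's think step by step.",
    "target": "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
              "5 + 6 = 11. The answer is 11.",  # reasoning before the answer
}
```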