i2mi t1_j786bu0 wrote
Reply to comment by HeyLittleTrain in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
Around 2M.

Edit: the number I gave is completely delusional. Sorry.
HeyLittleTrain t1_j7avkil wrote
Your answer seems substantially different from the others.
NapkinsOnMyAnkle t1_j9jtolb wrote
I've trained 100M-parameter CNNs on my laptop's 3070 (6 GB). So...
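For a rough sanity check of that claim, here is a back-of-envelope VRAM estimate for training a 100M-parameter model. This is a sketch under stated assumptions (fp32 weights, fp32 gradients, and Adam's two fp32 moment buffers); it deliberately excludes activation memory, which depends on batch size and architecture and is often the dominant term for CNNs. The function name `vram_gb` is just for illustration.

```python
def vram_gb(n_params, bytes_per_param=4, optimizer_states=2, gradients=1):
    """Rough GPU memory for weights + gradients + optimizer states.

    Excludes activations, which scale with batch size and input
    resolution and can easily exceed this figure.
    """
    copies = 1 + gradients + optimizer_states  # weights + grads + Adam moments
    return n_params * bytes_per_param * copies / 1e9

# 100M params in fp32 with Adam: 100e6 * 4 bytes * 4 copies = 1.6 GB,
# leaving headroom on a 6 GB card for (modest) activation memory.
print(f"{vram_gb(100_000_000):.1f} GB")
```

So the static memory alone fits comfortably in 6 GB; whether training actually fits depends on the batch size and feature-map sizes.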