BawkSoup t1_jcsgyoe wrote
Reply to comment by username001999 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
Okay, tankie. Keep it about machine learning.