BawkSoup t1_jcsgyoe wrote
Reply to comment by username001999 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
Okay, tankie. Keep it about machine learning.
BawkSoup t1_jdykxs6 wrote
Reply to [D] FOMO on the rapid pace of LLMs by 00001746
FOMO? This is peak 1st world problems.
It's work, man. Do your passions on your own time. Or start your own company.