extopico t1_jcsuio8 wrote
Reply to comment by username001999 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
What? No it’s not. Pointing out blatant whataboutism is always independently valid.
Why would you even write what you wrote? Is that a required riposte included in your briefing file, or part of your training?