CommunicationLocal78 t1_jcqw9zq wrote on March 18, 2023 at 9:32 PM
Reply to comment by BalorNG in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
There are a lot fewer forbidden topics in China than in the West.