BalorNG t1_jcsy0rl wrote
Reply to comment by username001999 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
Technically, I'm from Russia.
And, of course, you are able to read every opinion about the "special military operation" here... sometimes even without a VPN. It's just that voicing a "different one" can get you years in prison and your kids into a foster home for re-indoctrination. While the programmers who coded it might have a range of diverse opinions on this and other "politically sensitive" subjects, if they want their program to pass inspection in China, they WILL have to do considerable fine-tuning to throw away sensitive data, if our Russian Google (Yandex) front page is any indication. If this is a foundational model without fine-tuning, that's a different matter tho... but then it will hallucinate nonstop and produce "fakes" anyway...