relevantmeemayhere t1_jcrp2rr wrote
Reply to comment by Temporary-Warning-34 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
Honestly, really comes off as word salad lol.
I haven’t read the details, but it sounds like resampling in a serial learner?
visarga t1_jctfir1 wrote
Human feedback is being bootstrapped from GPT-3 predictions "stolen" against OpenAI's will (for just $500 in API bills).