Puzzleheaded_Acadia1 t1_jdjukpg wrote
Reply to comment by G_fucking_G in [P] ChatGPT with GPT-2: A minimum example of aligning language models with RLHF similar to ChatGPT by liyanjia92
I'm new to this. Can you explain what this project is about, and what an SFT model, a reward model, RLHF, and an epoch are?
liyanjia92 OP t1_jdjx0zs wrote
The project explores whether RLHF can help smaller models also produce natural-sounding output in a human/assistant conversation.
you can take a look at this Get Started section for more details: https://github.com/ethanyanjiali/minChatGPT#get-started
in short, SFT is supervised fine-tuning. The reward model is the one used to generate a reward given the language model's output (the action) in reinforcement learning. RLHF means using human feedback to set up the reinforcement learning, and an epoch means the model has seen all the training data once.
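To make the reward-model idea concrete, here is a toy sketch (my own illustration, not the minChatGPT code, which uses a transformer) of how a reward model can be trained from human preference pairs. The featurizer, example pairs, and hyperparameters are all hypothetical; the only part taken from RLHF is the pairwise loss, -log(sigmoid(r_chosen - r_rejected)), and the epoch loop, where one epoch is one full pass over the data.

```python
import math

def features(text):
    # hypothetical featurizer: bag-of-words counts
    feats = {}
    for w in text.lower().split():
        feats[w] = feats.get(w, 0) + 1
    return feats

def score(weights, text):
    # reward = linear function of word counts (stand-in for a transformer)
    return sum(weights.get(w, 0.0) * c for w, c in features(text).items())

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# made-up human-labeled preference pairs: (chosen, rejected)
pairs = [
    ("sure , here are the steps", "no"),
    ("happy to help with that", "go away"),
]

weights = {}
lr = 0.5
for epoch in range(20):  # one epoch = one pass over all pairs
    for chosen, rejected in pairs:
        margin = score(weights, chosen) - score(weights, rejected)
        # gradient of -log(sigmoid(margin)) w.r.t. margin
        grad = sigmoid(margin) - 1.0
        # push chosen-word weights up, rejected-word weights down
        for w, c in features(chosen).items():
            weights[w] = weights.get(w, 0.0) - lr * grad * c
        for w, c in features(rejected).items():
            weights[w] = weights.get(w, 0.0) + lr * grad * c

# after training, the reward model prefers the chosen responses
for chosen, rejected in pairs:
    assert score(weights, chosen) > score(weights, rejected)
```

In full RLHF the trained reward model then scores the language model's generations, and that scalar reward drives a PPO update of the policy.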
https://web.stanford.edu/class/cs224n/ this could be a good class if you are new; they have a YouTube version from 2021 (except that they probably didn't cover RLHF back then)
Puzzleheaded_Acadia1 t1_jdkvut4 wrote
Thx