Submitted by JClub t3_10emf7a in MachineLearning
koolaidman123 t1_j4uuko0 wrote
Reply to comment by JClub in [D] RLHF - What type of rewards to use? by JClub
ChatGPT (assuming it uses the same training as InstructGPT) doesn't use a numerical scale; everything is a pairwise comparison between 2 (out of k) sampled outputs for a prompt, so everything reduces to pairwise preferences.
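A minimal sketch of the "2 out of k" idea: if k outputs are sampled per prompt, labelers compare them pairwise, giving C(k, 2) comparisons per prompt (the output strings here are placeholders):

```python
from itertools import combinations

# k hypothetical sampled completions for one prompt
k_outputs = ["output_a", "output_b", "output_c", "output_d"]

# each prompt yields C(k, 2) pairwise comparisons for the labeler
pairs = list(combinations(k_outputs, 2))
print(len(pairs))  # C(4, 2) = 6 comparisons
```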
JClub OP t1_j4v057p wrote
Yeah, InstructGPT is like that. How do you calculate a reward score for each output in this ranking scenario?
koolaidman123 t1_j4v2uyq wrote
It's just a binary pairwise comparison of which output is preferred between the 2. Read the InstructGPT paper, or the W&B post: https://wandb.ai/carperai/summarize_RLHF/reports/Implementing-RLHF-Learning-to-Summarize-with-trlX--VmlldzozMzAwODM2#train-the-reward-model
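The pairwise training objective from the InstructGPT paper can be sketched as follows: the reward model scores both outputs, and the loss pushes the preferred ("chosen") output's score above the rejected one's via a log-sigmoid of the score difference (the toy reward values below are made up):

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(reward_chosen, reward_rejected):
    # InstructGPT-style pairwise loss: maximize
    # log sigmoid(r_chosen - r_rejected) over comparison pairs
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# toy scalar rewards for two comparison pairs (hypothetical values)
r_chosen = torch.tensor([1.5, 0.8])
r_rejected = torch.tensor([0.2, -0.3])
loss = pairwise_reward_loss(r_chosen, r_rejected)
```

The loss shrinks as the margin between chosen and rejected scores grows, so the model learns a scalar score consistent with the human rankings without ever seeing an absolute numerical label.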
JClub OP t1_j4v5d0y wrote
Ah right, then you can just use the model's reward directly, or pass it through a sigmoid so that the reward is between 0 and 1!
Do you think that the sigmoid is needed?
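The two options can be sketched side by side: the raw scalar score from the reward model is unbounded, while a sigmoid squashes it into (0, 1) (the raw values below are hypothetical):

```python
import torch

# hypothetical raw scalar rewards from a trained reward model
raw_rewards = torch.tensor([-2.0, 0.0, 3.5])

# option 1: use the raw score directly (unbounded)
# option 2: squash into (0, 1) with a sigmoid
squashed = torch.sigmoid(raw_rewards)
```

The sigmoid is monotonic, so it preserves the ranking of outputs; the trade-off is that it compresses differences between scores far from zero, which may matter for the scale of the PPO advantage.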