JClub OP t1_j51h8up wrote
Reply to comment by Ouitos in [R] A simple explanation of Reinforcement Learning from Human Feedback (RLHF) by JClub
> Shouldn't it be the opposite?
Yes, that makes more sense. Will change!
> How is that different from min(ratio * R, 1.2 * R)? Does 0.8 have any influence at all?
Maybe I did not explain properly what the clip is doing. If you have ratio=0.6, the clip raises it to 0.8, and if ratio > 1.2, the clip lowers it to 1.2.
Does that make more sense? Regarding the min operation, it's a heuristic to take the smaller (more pessimistic) of the two updates, tbh.
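In code, the objective looks roughly like this (a quick NumPy sketch with eps=0.2 to get the 0.8/1.2 bounds we're discussing, not the exact code from the post):

```python
import numpy as np

def clipped_objective(ratio, R, eps=0.2):
    unclipped = ratio * R                               # plain policy-gradient term
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * R  # ratio forced into [0.8, 1.2]
    return np.minimum(unclipped, clipped)               # keep the smaller update
```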
Ouitos t1_j54nh7v wrote
Yes, but if you have a ratio of 0.6, you then take the min of 0.6 * R and 0.8 * R, which is ratio * R. In the end, the clip is only effective one way, and the 0.8 lower limit is never used. Or maybe R has a particular property that makes this not as straightforward?
JClub OP t1_j57rrn6 wrote
Ah yes, you're right. I actually don't know why offhand, but you can check the implementation and ask about it on GitHub.
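Edit: played with the numbers a bit. If R can be negative (and advantages usually can be), the min actually picks the clipped term on the low side, so the 0.8 bound does kick in there:

```python
# ratio = 0.6, below the 0.8 lower bound:
min(0.6 * 1.0, 0.8 * 1.0)    # ->  0.6  -> lower clip inactive when R > 0, as you said
min(0.6 * -1.0, 0.8 * -1.0)  # -> -0.8  -> lower clip binds when R < 0
```

So the clip seems to be effective both ways once negative advantages show up, but worth confirming against the actual implementation.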