Ouitos t1_j50cm0i wrote
Hi, thanks for the explanation!
Two comments:
> 1. Make "New probs" equal to "Initial probs" to initialize.
Shouldn't it be the opposite? That is, make the initial probs equal to the first occurrence of new probs? Equality is symmetric, of course, but as written it sounds like you change new probs to match initial probs, which contradicts the diagram showing that new probs is always the output of our LM.
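In code the two readings differ, since assignment, unlike equality, has a direction. Here's a minimal sketch of the reading the diagram suggests (the names and the `lm_probs` stand-in are hypothetical, not from the post):

```python
import numpy as np

def lm_probs():
    # Hypothetical stand-in for the LM's per-token probabilities.
    logits = np.random.randn(5)
    return np.exp(logits) / np.exp(logits).sum()

# Diagram's reading: new_probs always comes from the LM, and
# initial_probs is a frozen snapshot of its first output, i.e. the
# opposite assignment direction from what the text says.
new_probs = lm_probs()
initial_probs = new_probs.copy()

ratio = new_probs / initial_probs  # all ones at the first step
```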
> loss = min(ratio * R, clip(ratio, 0.8, 1.2) * R)
Isn't the min operation redundant with the clip? How is that different from min(ratio * R, 1.2 * R)? Does 0.8 have any influence at all?
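To make the question concrete, here's a quick numeric check of the quoted expression (my sketch; treating R as a plain scalar is my simplification):

```python
import numpy as np

def loss(ratio, R):
    # The expression from the post: min(ratio * R, clip(ratio, 0.8, 1.2) * R)
    return min(ratio * R, np.clip(ratio, 0.8, 1.2) * R)

R = 1.0  # a positive reward
for ratio in [0.6, 1.0, 1.5]:
    print(ratio, loss(ratio, R), min(ratio * R, 1.2 * R))
# With R > 0 the two expressions agree at every ratio,
# which is exactly what prompts the question.
```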
Ouitos t1_j54nh7v wrote
Reply to comment by JClub in [R] A simple explanation of Reinforcement Learning from Human Feedback (RLHF) by JClub
Yes, but if you have a ratio of 0.6, you then take the min of 0.6 * R and 0.8 * R, which is ratio * R. In the end, the clip is only effective one way, and the 0.8 lower limit is never used. Or maybe R has a particular property that makes this not as straightforward?
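Checking that last possibility numerically: if R is an advantage estimate it can be negative (my assumption; the post doesn't pin this down), and in that case the min flips and the 0.8 bound does bind:

```python
import numpy as np

def loss(ratio, R):
    return min(ratio * R, np.clip(ratio, 0.8, 1.2) * R)

# Positive R: min(0.6 * R, 0.8 * R) picks ratio * R, so 0.8 is inert.
print(loss(0.6, 1.0))   # 0.6  == 0.6 * 1.0
# Negative R: the min now picks the clipped term, so 0.8 is used.
print(loss(0.6, -1.0))  # -0.8 == 0.8 * -1.0, not 0.6 * -1.0
```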