cthorrez t1_j67aa39 wrote
Reply to comment by Complex_Candidate_28 in [R] Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers by currentscurrents
If the goal is the mechanism rather than the performance, why tune the seed for performance in the first place? The choice of examples doesn't change the mechanism.
Complex_Candidate_28 t1_j67aytx wrote
Because ICL is unstable for small-size LMs: it sometimes degenerates to classifying all examples into a single category. The protocol tries to ensure that ICL is analyzed when it works well. (For much larger LMs, the performance variance is much smaller, and this step can be skipped.)
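A minimal sketch of what such a protocol might look like, assuming the instability shows up as predictions collapsing onto one label. The names `predict_fn`, `collapsed`, and `pick_stable_seed` are hypothetical illustrations, not the paper's actual code; `predict_fn` stands in for an LM queried with an in-context prompt:

```python
import random
from collections import Counter

def collapsed(predictions, threshold=0.9):
    """Return True if one label dominates the predictions,
    i.e. ICL has degenerated to a single-class classifier."""
    counts = Counter(predictions)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(predictions) >= threshold

def pick_stable_seed(predict_fn, demos, test_inputs, seeds):
    """Try several seeds for ordering the in-context demonstrations
    and keep the first one whose predictions are not degenerate.
    predict_fn(ordered_demos, x) is a hypothetical stand-in for
    querying the LM with a prompt built from ordered_demos."""
    for seed in seeds:
        rng = random.Random(seed)
        ordered = demos[:]
        rng.shuffle(ordered)  # demonstration order, controlled by the seed
        preds = [predict_fn(ordered, x) for x in test_inputs]
        if not collapsed(preds):
            return seed, preds
    return None, None  # every seed degenerated
```

The point is that the seed selection filters out degenerate runs rather than cherry-picking high accuracy: a run is rejected only if its predictions collapse, which matches the failure mode described above.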
cthorrez t1_j67csjx wrote
That's an interesting topic that I think deserves further investigation. On the surface it sounds like the size of the LM impacts the mechanism by which the LM is able to "secretly perform gradient descent".
Is finetuning similarly unstable for small sized LMs?
Complex_Candidate_28 t1_j67cx4a wrote
Yes, size also affects finetuning, but finetuning is much less sensitive to it.