Complex_Candidate_28 t1_j67aytx wrote
Reply to comment by cthorrez in [R] Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers by currentscurrents
Because for small LMs, ICL is unstable: it sometimes degenerates to classifying all examples into a single category. The protocol tries to ensure we only analyze ICL when it works well. (For much larger LMs, the performance variance is much smaller, so this step can be skipped.)
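To make the failure mode concrete, here's a minimal sketch of the kind of degeneracy check such a protocol could use. This is just illustrative, not from the paper; the function name `is_degenerate` and the 0.95 cutoff are assumptions:

    from collections import Counter

    def is_degenerate(predictions, threshold=0.95):
        """Flag an ICL run whose predictions collapse onto one class.

        `predictions`: list of predicted labels on the eval set.
        `threshold`: assumed cutoff for "almost all one class".
        """
        counts = Counter(predictions)
        top_fraction = counts.most_common(1)[0][1] / len(predictions)
        return top_fraction >= threshold

    # Keep only runs (e.g., prompt orderings / seeds) where ICL
    # does not collapse onto a single label.
    runs = {
        "seed_0": ["pos", "pos", "pos", "pos"],  # collapsed -> discarded
        "seed_1": ["pos", "neg", "neg", "pos"],  # healthy -> kept
    }
    stable_runs = {k: v for k, v in runs.items() if not is_degenerate(v)}
    print(stable_runs)  # {'seed_1': ['pos', 'neg', 'neg', 'pos']}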
cthorrez t1_j67csjx wrote
That's an interesting topic that I think deserves further investigation. On the surface it sounds like the size of the LM impacts the mechanism by which the LM is able to "secretly perform gradient descent".
Is finetuning similarly unstable for small sized LMs?
Complex_Candidate_28 t1_j67cx4a wrote
Yes, size also affects finetuning, but finetuning is much less sensitive to it.