VisceralExperience
VisceralExperience t1_j61znkf wrote
Reply to [R] Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers by currentscurrents
The amount of blatant anthropomorphism that comes from AI researchers is so disgusting. Laypeople's knowledge of the state of the field is already twisted enough from reality, and the researchers are 100% to blame. Seriously, I'd like to see papers get rejected for this delusional framing of results.
VisceralExperience t1_j623jjy wrote
Reply to comment by currentscurrents in [R] Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers by currentscurrents
"secretly" is what I was referring to