SignificanceAlone203 t1_j87o8uo wrote

The weights that the AI updates and the "parameters we apply" are quite different. Weights are most definitely updated at run time during training. The fact that it learns without the researcher manually changing parameters is... kind of the whole point of AI.

8

MrChurro3164 t1_j87pn2y wrote

I think terms are being confused and it’s written poorly. From what I gather, the weights are not being updated, and this is not during training. This is someone chatting with the model and it learns new things “on the fly”.

From another article:

> For instance, someone could feed the model several example sentences and their sentiments (positive or negative), then prompt it with a new sentence, and the model can give the correct sentiment. Typically, a machine-learning model like GPT-3 would need to be retrained with new data for this new task. During this training process, the model updates its parameters as it processes new information to learn the task. But with in-context learning, the model’s parameters aren’t updated, so it seems like the model learns a new task without learning anything at all.
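To make the quote concrete, here's a minimal sketch of what in-context learning looks like from the user's side: a few-shot prompt where labeled examples are packed into the context and no weight update ever happens. The sentiment task and example sentences are hypothetical, just mirroring the article's scenario:

```python
# Sketch of in-context ("few-shot") learning: the model's weights stay
# frozen; the "learning" happens entirely through examples in the prompt.
# The task and sentences below are made up for illustration.

examples = [
    ("I loved this movie!", "positive"),
    ("The food was awful.", "negative"),
    ("What a fantastic day.", "positive"),
]

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples, then the unlabeled query.

    No parameters are updated anywhere; the context alone conditions
    the model's next-token prediction toward the right label.
    """
    blocks = [f"Sentence: {s}\nSentiment: {label}" for s, label in examples]
    blocks.append(f"Sentence: {query}\nSentiment:")
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(examples, "I can't wait to go back.")
print(prompt)
```

The resulting string would be sent to the model as-is; a retrained model would instead need a gradient step on each (sentence, label) pair.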

5

jeffyoulose t1_j88xt1n wrote

How is it learning if no weights change? At best it's simulating another training run, just for the session of input given at inference time.

1

professorDissociate t1_j89xizq wrote

Ah, so we’ve found the novel discovery by the sound of this confusion then… yes?

5