ElbowWavingOversight t1_j870smg wrote
Reply to comment by PM_ME_GAY_STUF in Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
No. Not until these LLMs came around, anyway. What other examples do you have of this? Even few-shot or zero-shot learning, which lets a model generalize to classes it never saw during training, is limited to the associations between classes that it learned during training. It can't learn new associations from new data after the fact without rerunning the training loop and updating the parameters.
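To make that concrete, here's a minimal PyTorch-style sketch (the toy classifier and data are made up, not from the article): for a conventionally trained model, getting it to pick up a new association means running an explicit optimization step so the parameters themselves change.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)                      # toy classifier over 4 known classes
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x_new = torch.randn(8, 16)                    # "after-the-fact" data
y_new = torch.randint(0, 4, (8,))             # new labels to associate with it

before = model.weight.clone()

# Without this loop, the model's behaviour on the new association cannot change.
for _ in range(10):
    opt.zero_grad()
    loss_fn(model(x_new), y_new).backward()
    opt.step()

print(torch.allclose(before, model.weight))   # False: "learning" here means parameter updates
```

The contrast with in-context learning in LLMs is that nothing like the loop above runs at inference time, yet the model's behaviour still adapts to the examples in the prompt.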
graham_fyffe t1_j87i6tw wrote
Look up “learned learning” or “learning to learn by gradient descent by gradient descent” (2016) for a few examples.
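Very roughly, the idea in that 2016 paper (Andrychowicz et al.) is that the update rule itself is a small recurrent network rather than a hand-designed rule like SGD. This is a simplified sketch, not the authors' code; in the real paper the LSTM optimizer is itself trained by backpropagating through the unrolled loop, which is omitted here.

```python
import torch
import torch.nn as nn

theta = torch.randn(10, requires_grad=True)   # parameters being optimized (the "optimizee")
meta_opt = nn.LSTMCell(1, 8)                  # learned optimizer, applied per parameter
to_update = nn.Linear(8, 1)                   # maps LSTM state to a proposed update

h = torch.zeros(10, 8)
c = torch.zeros(10, 8)

def loss_fn(t):
    return (t ** 2).sum()                      # toy quadratic objective

for _ in range(20):
    theta = theta.detach().requires_grad_(True)
    loss = loss_fn(theta)
    grad, = torch.autograd.grad(loss, theta)
    # The LSTM sees each parameter's gradient and proposes that parameter's update.
    h, c = meta_opt(grad.unsqueeze(-1), (h, c))
    theta = theta + to_update(h).squeeze(-1)   # update comes from the learned optimizer,
                                               # not from a fixed rule like -lr * grad
```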