BadassGhost t1_j570h0y wrote

Then what would be meaningful? What would convince you that something is close to AGI, but not yet AGI?

For me, this is exactly what I would expect to see if something was almost AGI but not yet there.

The difference from previous, specialized AI is that these models can learn seemingly any concept, both during training and after training (in context). Even out-of-distribution concepts can be taught with a single-digit number of examples.
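The in-context route can be sketched as few-shot prompting: no weight updates, just a handful of demonstrations in the prompt. This is a minimal illustration, not any specific model's API; the word-reversal task and the helper name are made up as a stand-in for an out-of-distribution concept.

```python
# Hypothetical sketch of in-context (few-shot) learning: the model is never
# retrained; the new "concept" lives entirely in the prompt text.

def build_few_shot_prompt(examples, query):
    """Assemble a prompt from a handful of input->output demonstrations."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# A single-digit number of demonstrations is often enough for a large
# model to infer the underlying rule (here: reverse the word).
examples = [
    ("cat", "tac"),
    ("open", "nepo"),
    ("world", "dlrow"),
]
print(build_few_shot_prompt(examples, "model"))
```

The point is that the "learning" happens at inference time, from the prompt alone, which is what distinguishes this from a model specialized for one task at training time.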

(I am not the one downvoting you)

3