Yuli-Ban t1_ix73o10 wrote
Reply to comment by Ok_Homework9290 in Metaculus community prediction for "Date Weakly General AI is Publicly Known" has dropped to Oct 26, 2027 by maxtility
It's not that Gato isn't a big deal as much as it's the proof of concept of a big deal.
Gato isn't AGI because it's too small, has no task generalization, and has too short a memory. None of that was necessarily the point, since it was designed to prove that generalist models are possible.
If you have a follow-up to Gato that's 10x or 100x larger, has the ability to cross/interpolate its knowledge across learned skills, and has a context window larger than 8,000 tokens, then you're approaching something like a proto-AGI.
Ok_Homework9290 t1_ix75u5y wrote
Perhaps the proof of concept is a big deal, perhaps it isn't. I guess we'll have a better idea when the next version comes out, whenever that may be.
Lone-Pine t1_ix7cbbo wrote
> the ability to cross/interpolate its knowledge across learned skills
There's no evidence that Gato could do this and if there was, Google would let us know. When we finally get to see a generalist agent in a public demonstration, it will be interesting to see if it acts like multiple separate systems that each do their own tasks or if it will actually have a general, integrated way of relating to the world.
Yuli-Ban t1_ix7hy5h wrote
> There's no evidence that Gato could do this and if there was, Google would let us know.
That's my point.
Gato as it currently stands lacks that capability and thus can't be considered even a proto-proto-AGI, but rather some weird intermediate type of AI between general and narrow AI. Or less than that: a bundle of 600 narrow AIs tied together like a fasces.
If a follow-up to Gato does have task interpolation, however, then we'd need to start having a serious discussion as to whether it's something like a proto-AGI.
GuyWithLag t1_ix8lmg8 wrote
>If you have a follow-up to Gato that's 10x or 100x larger, has the ability to cross/interpolate its knowledge across learned skills, and has a context window larger than 8,000 tokens, then you're approaching something like a proto-AGI.
And this is exactly why I think we're missing some structural/architectural component or breakthrough: the current models have the feel of unrolled loops.