Zealousideal_Ad3783 t1_ix6nksl wrote
Down by 15 years since April…
Shelfrock77 t1_ix6rr66 wrote
AI is going to bring us that DLC we all want
HeinrichTheWolf_17 t1_ix7jd0o wrote
There better be no loot boxes.
Flare_Starchild t1_ix79b32 wrote
PATCH NOTES ASAP WITH FEEDBACK TAKEN INTO ACCOUNT AND WELCOMED PLEASE!
I have notes.
Ok_Homework9290 t1_ix6y3ua wrote
As eyebrow-raising as that may seem, keep in mind that anyone can make a prediction on that site (which is why I don't take their predictions too seriously), and that the people who make predictions there tend to be tech junkies, who are generally optimistic when it comes to timelines.
Also, I'm a bit skeptical that the amount of progress made in AI this year (which has been impressive, no doubt) merits THAT much of a shave-off from the April prediction. I kinda feel like that's an overreaction, especially if Gato really isn't as big a deal as some people make it out to be. Just my two cents.
Yuli-Ban t1_ix73o10 wrote
It's not that Gato isn't a big deal so much as it's the proof of concept of a big deal.
Gato isn't AGI because it's too small, has no task generalization, and has too short a memory. None of which was necessarily the point, since it was designed to prove that generalist models are possible.
If you have a follow-up to Gato that's 10x or 100x larger, has the ability to cross/interpolate its knowledge across learned skills, and has a context window larger than 8,000 tokens, then you're approaching something like a proto-AGI.
Ok_Homework9290 t1_ix75u5y wrote
Perhaps the proof of concept is a big deal, perhaps it isn't. I guess we'll have a better idea when the next version comes out, whenever that may be.
Lone-Pine t1_ix7cbbo wrote
> the ability to cross/interpolate its knowledge across learned skills
There's no evidence that Gato could do this and if there was, Google would let us know. When we finally get to see a generalist agent in a public demonstration, it will be interesting to see if it acts like multiple separate systems that each do their own tasks or if it will actually have a general, integrated way of relating to the world.
Yuli-Ban t1_ix7hy5h wrote
> There's no evidence that Gato could do this and if there was, Google would let us know.
That's my point.
Gato as it currently stands lacks that capability and thus can't be considered even a proto-proto-AGI; it's more like some weird intermediate type of AI between narrow and general AI. Or less than that: a bundle of 600 narrow AIs tied together like a fasces.
If a follow-up to Gato does have task interpolation, however, then we'd need to start having a serious discussion as to whether it's something like a proto-AGI.
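To make that distinction concrete, here's a rough toy sketch (purely my own illustration; the names and structure are made up and have nothing to do with Gato's actual implementation) of a "bundle of narrow AIs" versus a single shared generalist:

```python
# Toy contrast, not Gato's architecture: a bundle of narrow AIs routes each
# task to its own isolated specialist, while a generalist runs every task
# through one shared model, so skills can at least in principle interpolate.

def narrow_bundle(task, observation):
    # One hand-picked specialist per task; nothing is shared between them.
    specialists = {
        "caption": lambda obs: f"caption for {obs}",
        "atari":   lambda obs: f"joystick action for {obs}",
        "chat":    lambda obs: f"reply to {obs}",
    }
    return specialists[task](observation)

def generalist(task, observation, shared_model=None):
    # A single shared model conditioned on the task: knowledge learned for
    # one task lives in the same weights that serve every other task.
    if shared_model is None:
        shared_model = lambda inp: f"output for {inp}"
    return shared_model((task, observation))

print(narrow_bundle("chat", "hello"))   # only the chat specialist ever sees this
print(generalist("chat", "hello"))      # the same weights handle every task
```

The interesting question is whether a Gato successor behaves like the second function rather than the first.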
GuyWithLag t1_ix8lmg8 wrote
> If you have a follow-up to Gato that's 10x or 100x larger, has the ability to cross/interpolate its knowledge across learned skills, and has a context window larger than 8,000 tokens, then you're approaching something like a proto-AGI.
And this is exactly why I think we're missing some structural/architectural component or breakthrough: the current models have the feel of unrolled loops.
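To give a feel for what I mean by "unrolled loops", here's a rough toy sketch (entirely my own illustration, not any real model's code): a fixed stack of distinct layers behaves like a loop unrolled N times, whereas a recurrent design reuses one block until some halting condition fires.

```python
# Toy sketch of "unrolled loops" (my own illustration, not a real model):
# a fixed stack of distinct layers is structurally a loop unrolled N times,
# while a recurrent design reuses one block until a halting test fires.

def layer(state, params):
    # Stand-in for one transformer block; real blocks mix attention and MLPs.
    return [s + p for s, p in zip(state, params)]

def unrolled_model(state, all_params):
    # Fixed depth, fixed compute: layer_1 ... layer_N, each applied exactly once.
    for params in all_params:              # N distinct parameter sets
        state = layer(state, params)
    return state

def recurrent_model(state, shared_params, done, max_steps=100):
    # One shared block looped until a (learned or handwritten) stopping rule,
    # so compute can adapt to the problem instead of being fixed in advance.
    for _ in range(max_steps):
        state = layer(state, shared_params)
        if done(state):
            break
    return state

print(unrolled_model([0.0, 0.0], all_params=[[1.0, 2.0]] * 4))
print(recurrent_model([0.0, 0.0], shared_params=[1.0, 2.0],
                      done=lambda st: st[0] >= 3.0))
```

That's the sense in which today's models feel like the loop has already been unrolled and frozen at design time.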
rixtil41 t1_ix747ch wrote
Let's come back in late 2026 and see just how wrong or right you are.
[deleted] t1_ix6uu5u wrote
[deleted]