Submitted by Sharp_Soup_2353 t3_115v2k6 in singularity

Is there any update on Gato? It's been more than 8 months since the last time I heard about it. I try to look it up once in a while, but all I see is old news (scaling is all it needs). If anyone here knows something about it or has an update (even a minor one), please share it with us.

26

Comments

adt t1_j93hgk2 wrote

Not since 1/Jul/2022:

DeepMind Gato. In a Lex Fridman interview, DeepMind CEO Demis Hassabis revealed that the company is already training the next embodied generalist agent, ready for AGI. The original Gato was already an unforeseen innovation.

‘Gato predicts potentially any action or any token, and it’s just the beginning really, it’s our most general agent… that itself can be scaled up massively, more than we’ve done so far, obviously we’re in the middle of doing that.’

https://youtu.be/Gfr50f6ZBvo

via my Dec/2022 AI report:

https://lifearchitect.ai/the-sky-is-infinite/

23

maskedpaki t1_j941kv3 wrote

"In the middle of doing that," but we haven't heard a thing in 8 months.

Are they purposely hiding it because it's worth money?

5

turnip_burrito t1_j95dh2y wrote

If it's worth money, then it's likely an existential risk to humanity.

1

maskedpaki t1_j95ev1b wrote

No, it's not. ChatGPT is worth money; people are paying 20 bucks for Plus.

Not an existential risk.

1

turnip_burrito t1_j95ezks wrote

We're talking about Gato, a generalist agent...

Not ChatGPT. Context, man!

For what it's worth, though, I'll add a bit of what I think about ChatGPT and LLMs in general: IMO, if they get any smarter in a couple of different ways, they are also an existential risk, due to roleplay text generation combined with the ability to interface with APIs, so we should restrict use of those too until we understand them better.

1

maskedpaki t1_j95p4jb wrote

Bringing up another AI as an analogy for why your assertion that "if it makes money, it could kill us" is false is not taking things out of context. It's just a way of showing that you were wrong about AIs being able to kill us just because they can make money: we have AIs that make money and have not killed us.

With all that said, I do believe in AI doom.

1

sideways t1_j950ycw wrote

My money is on Gato being, or being closer to, "true" AGI than anything else at the time it's made public.

7

turnip_burrito t1_j95d9xr wrote

I agree, and it does make me nervous that we may not have alignment solved by then.

Hey AI researchers on this sub. I know you're lurking here.

Please organize AI safety meetings in your workplace. Bring your colleagues to conference events on AI existential safety. Talk with your bosses about making it a priority.

Thanks,

Concerned person

7

TemetN t1_j94tv0c wrote

Hassabis mentioned the scaling thing something like six-ish months ago, which as far as I understood meant they were working on a sort of Gato 2, but it takes time. It's worth a reminder that we still haven't seen GPT-4, though it wouldn't surprise me to see both GPT-4 and Gato 2 this year (in point of fact, that's my default).

3

airduster_9000 t1_j95nbix wrote

https://www.deepmind.com/publications/a-generalist-agent

Published: November 10, 2022

Abstract
Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.
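
To make "deciding based on its context whether to output text, joint torques, button presses, or other tokens" concrete, here is a minimal Python sketch of the shared-vocabulary idea: every modality is flattened into one token stream for a single autoregressive model. The 32k text vocabulary and 1,024 continuous-value bins match the paper; the uniform binning is a simplification (the paper mu-law encodes continuous values before binning), and the function names and values are illustrative, not DeepMind's code.

```python
# Illustrative sketch of Gato-style tokenization: text, proprioception,
# and actions all map into one flat vocabulary so a single transformer
# can model every task autoregressively.

TEXT_VOCAB = 32_000     # SentencePiece subwords occupy ids [0, 32000)
DISCRETE_BINS = 1_024   # continuous values get 1024 bins above that range

def tokenize_continuous(values, lo=-1.0, hi=1.0):
    """Clip each value to [lo, hi], bin it uniformly, and offset the bin
    index past the text vocabulary (the paper mu-law compands values
    before binning; plain uniform binning shown here)."""
    tokens = []
    for v in values:
        v = max(lo, min(hi, v))
        bin_ix = int((v - lo) / (hi - lo) * (DISCRETE_BINS - 1))
        tokens.append(TEXT_VOCAB + bin_ix)
    return tokens

def detokenize_continuous(tokens, lo=-1.0, hi=1.0):
    """Invert the binning to recover approximate continuous values."""
    return [lo + (t - TEXT_VOCAB) / (DISCRETE_BINS - 1) * (hi - lo)
            for t in tokens]

# One robotics timestep becomes [observation tokens..., action tokens...],
# exactly the kind of sequence a language model is trained to continue.
obs = tokenize_continuous([0.12, -0.57, 0.98])  # e.g. joint angles
act = tokenize_continuous([0.30, -0.10])        # e.g. joint torques
print(obs + act)
print(detokenize_continuous(act))
```

Because everything lands in one vocabulary, "deciding what to output" reduces to ordinary next-token prediction: the context determines whether the next token should fall in the text range or the action range.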

Conclusions
Transformer sequence models are effective as multi-task multi-embodiment policies, including for real-world text, vision and robotics tasks. They show promise as well in few-shot out-of-distribution task learning. In the future, such models could be used as a default starting point via prompting or fine-tuning to learn new behaviors, rather than training from scratch. Given scaling law trends, the performance across all tasks including dialogue will increase with scale in parameters, data and compute. Better hardware and network architectures will allow training bigger models while maintaining real-time robot control capability. By scaling up and iterating on this same basic approach, we can build a useful general-purpose agent.
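
The "default starting point via prompting" idea in that conclusion can be pictured as a plain decode loop: prepend a demonstration of the new task to the context and let the trained model emit the action tokens. A minimal sketch, with a random stub standing in for the trained transformer and made-up token values:

```python
import random

VOCAB = 33_024  # 32k text tokens + 1024 discretized-value tokens

def model(context_tokens):
    """Stand-in for the trained sequence model: return the next token
    given everything seen so far. A real agent would run a transformer
    forward pass here instead of sampling at random."""
    return random.randrange(VOCAB)

def act(demo_tokens, obs_tokens, n_action_tokens):
    """Prompted control: the demonstration plus the current observation
    form the prompt, then the action is decoded one token at a time."""
    context = list(demo_tokens) + list(obs_tokens)
    action = []
    for _ in range(n_action_tokens):
        tok = model(context)
        context.append(tok)  # decoded tokens feed back into the context
        action.append(tok)
    return action

demo = [32_100, 32_200, 32_300]  # tokens from an expert demonstration
obs = [32_050, 32_400]           # tokens for the current observation
print(act(demo, obs, n_action_tokens=2))
```

The same loop also shows where the real-time control constraint mentioned above comes from: each control step costs one forward pass per action token, so model size is capped by how fast those passes run on the robot's hardware.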

3

No_Ninja3309_NoNoYes t1_j95k1m5 wrote

That's reinforcement learning, right? My friend Fred says that RL is more fragile than supervised learning. It has to do with the flexible nature of RL. It's good enough for some games, though.

1