Submitted by Sharp_Soup_2353 t3_115v2k6 in singularity
adt t1_j93hgk2 wrote
Not since 1/Jul/2022:
DeepMind Gato. In a Lex Fridman interview, DeepMind CEO Demis Hassabis revealed that the company is already training the next embodied generalist agent as a step toward AGI. The original Gato was already an unprecedented innovation.
‘Gato predicts potentially any action or any token, and it’s just the beginning really, it’s our most general agent… that itself can be scaled up massively, more than we’ve done so far, obviously we’re in the middle of doing that.’
via my Dec/2022 AI report.
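For anyone unsure what "Gato predicts potentially any action or any token" means in practice: the Gato paper describes serializing every modality (text, images, proprioception, continuous actions) into one flat token sequence, with continuous values discretized into a fixed number of bins, so a single autoregressive transformer can predict the next token regardless of modality. Below is a minimal sketch of just that serialization step; the helper names, value range, and exact constants are illustrative assumptions, not Gato's actual implementation.

    import numpy as np

    TEXT_VOCAB_SIZE = 32000   # assumed size of the text tokenizer's vocabulary
    NUM_VALUE_BINS = 1024     # assumed number of bins for continuous values

    def discretize(values, low=-1.0, high=1.0):
        """Map continuous values (e.g. joint torques) to integer bins,
        offset past the text vocabulary so token IDs never collide."""
        clipped = np.clip(values, low, high)
        bins = ((clipped - low) / (high - low) * (NUM_VALUE_BINS - 1)).astype(int)
        return (TEXT_VOCAB_SIZE + bins).tolist()

    def serialize_timestep(text_tokens, observation, action):
        """Flatten one timestep of mixed-modality data into a single token list
        that an autoregressive transformer could be trained to continue."""
        return text_tokens + discretize(observation) + discretize(action)

    # Example: a text instruction, a 3-D observation, and a 2-D continuous action
    sequence = serialize_timestep(
        text_tokens=[17, 942, 305],            # IDs from a text tokenizer (assumed)
        observation=np.array([0.1, -0.4, 0.7]),
        action=np.array([0.25, -0.9]),
    )
    print(sequence)  # one flat sequence of integer token IDs

The point of the sketch is only that "any action or any token" reduces to next-token prediction over one shared vocabulary; scaling the agent then means scaling that single sequence model.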
Sharp_Soup_2353 OP t1_j93qwyb wrote
thank you
maskedpaki t1_j941kv3 wrote
In the middle of doing that, but we haven't heard a thing in 8 months.

Are they purposely hiding it because it's worth money?
turnip_burrito t1_j95dh2y wrote
If it's worth money, then it's likely an existential risk to humanity.
maskedpaki t1_j95ev1b wrote
No, it's not. ChatGPT is worth money; people are paying 20 bucks a month for Plus.
It's not an existential risk.
turnip_burrito t1_j95ezks wrote
We're talking about Gato, a generalist agent...
Not ChatGPT. Context, man!
For what it's worth, I'll add what I think about ChatGPT and LLMs in general: IMO, if they get much smarter in a couple of different ways, they are also an existential risk, because roleplay text generation combined with the ability to interface with APIs is dangerous, so we should restrict their use too until we understand them better.
maskedpaki t1_j95p4jb wrote
Bringing up another AI as an analogy to show why your assertion that "if it makes money, it could kill us" is false is not taking things out of context. It's just a way of showing that you were wrong to claim AIs could kill us simply because they make money, since we already have AIs that make money and have not killed us.

With all that said, I do believe in AI doom.