SoylentRox t1_j5ub33w wrote
Reply to comment by LoquaciousAntipodean in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
>Yes. You're not really appreciating the notion of 'what most humans could do'. I'm not talking about what one little homo sapiens animal could do; that's fairly tiny and feeble in the overall consideration.
This is what AGI is.
We're saying we can build an AI whose skills are broad enough that it beats the average human, as measured by points on a test bench that both humans and the machine can attempt and that covers a huge range of skills.
That's AGI. It is empirically as smart as an average human.
No one is claiming version 1.0 will be smarter than more than one 'little homo sapiens animal', though obviously we expect to improve on that at an accelerating rate.
By this definition, I expect we may see AGI before 2030.
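As a rough illustration of what 'beats the average human on the bench' could mean, here is a minimal sketch; the task names, scores, and the pass rule are all invented for the example, not from any real benchmark:

```python
# Hypothetical AGI bench check: does the machine match or beat the average
# human across a broad set of tasks? Every number here is made up.

# Per-task scores, normalized so that 1.0 equals average-human performance.
machine_scores = {
    "arithmetic_word_problems": 1.3,  # above the average human
    "summarize_news_article": 1.1,
    "fold_laundry_sim": 0.6,          # well below the average human
    "drive_in_city_sim": 0.9,
}

def passes_bench(scores: dict[str, float], threshold: float = 1.0) -> bool:
    """One possible pass rule: mean normalized score >= the average human."""
    return sum(scores.values()) / len(scores) >= threshold

print(passes_bench(machine_scores))  # False: the mean is 0.975
```

Averaging is only one choice of aggregation; requiring the machine to clear the human baseline on every single task would be a much stricter reading of 'as smart as an average human'.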
As for self-replicating and taking over the universe: there is reason to think industrial tasks, running factories and the like, are easier than, say, original art. So even the first AGI would be able to do all the robotic control tasks needed to take over the universe, though it likely wouldn't have the data for the many steps that humans never wrote down.
LoquaciousAntipodean t1_j5xite4 wrote
The phrase
>That's AGI. It is empirically as smart as an average human.
Contains nothing that makes any sense to me. This is where your whole argument falls down. There's nothing 'empirical' about that claim at all, and what human brains and AI synthetic personalities do to generate apparent intelligence is so vastly, incomprehensibly different that it's ridiculous to compare the two like that.
Language is the only common factor between humans and AI. The actual 'cognitive processes' are vastly different, and we can't just expect our solipsistic human 'individual animal' based game-theory mumbo-jumbo to map onto an AI mind so easily. AI is a type of mind that is all social context, and zero true individuality.
We are being stupid to reason as if it would do anything like what 'a human would do'; it doesn't think like that at all. AI will be nothing like a 'superintelligent human'; I fully expect the first truly 'self-aware' AI to be an airheaded, schizophrenic, autistic-simulating mess of a personality. It's what I think I'm seeing early signs of in these Large Language Models: extreme 'cleverness', but no idea what to do with any of it.
SoylentRox t1_j5xq6e2 wrote
>Contains nothing that makes any sense to me. This is where your whole argument falls down. There's nothing 'empirical' about that claim at all,
Here's what the claim is.
Right now, Gato has demonstrated expert performance or better on a set of tasks: https://www.deepmind.com/blog/a-generalist-agent
So Gato is an AI. You might call it a 'narrow general AI' because it's only better than humans at about 200 tasks, and the average living human likely has a broader skillset.
Thus an AGI, an artificial general intelligence, is a machine that is as good as the average human on a set of tasks consistent with the breadth of skills an average living person has.
Basically, make the benchmark larger. 300,000 tasks or 3 million or 30 million. Whatever it has to be. The first machine to do as well as the average human on the benchmark is the world's first AGI.
A score on a cognitive test that humans have also taken is an empirical measurement of intelligence.
Arguably, you might also expect generality, simplicity of architecture, and online learning. You would put a lot of the benchmark's points on withheld tasks that use skills the other tasks require, but in a way the machine won't have seen.
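A sketch of that weighting idea, under the assumption that withheld tasks recombine skills the machine practiced separately; the tasks, scores, and weight are invented for illustration:

```python
# Hypothetical scoring that counts withheld (never-practiced) tasks more
# heavily, to reward transfer of skills rather than memorized performance.
# Scores are normalized so 1.0 equals average-human level; all made up.

practiced_tasks = {"stack_blocks": 1.2, "caption_image": 1.0}
withheld_tasks = {"stack_blocks_by_caption": 0.7}  # recombines the two skills

def weighted_score(practiced, withheld, withheld_weight=3.0):
    """Mean normalized score, with each withheld task counted several times."""
    total = sum(practiced.values()) + withheld_weight * sum(withheld.values())
    count = len(practiced) + withheld_weight * len(withheld)
    return total / count

print(round(weighted_score(practiced_tasks, withheld_tasks), 2))  # 0.86
```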
Because we cannot benchmark tasks that can't be automatically graded, it is difficult for the AGI to learn things like social interaction. So you are correct, it might be 'autistic'.
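For example (invented tasks, not from the thread): an arithmetic answer can be machine-checked against a ground truth, while social grace cannot, which is what keeps it off the benchmark:

```python
def grade_arithmetic(answer: str) -> bool:
    # Auto-gradable: there is a single checkable ground truth.
    return answer.strip() == "42"

def grade_social_grace(reply: str) -> bool:
    # Not auto-gradable: no fixed check exists without a human judge
    # in the loop, so this skill is hard to put on the benchmark.
    raise NotImplementedError("needs human judgment")

print(grade_arithmetic(" 42 "))  # True
```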
It will probably not even have a personality. It's basically a robot: if you tell it to do something, and that something is similar enough to things it has practiced, it will do it successfully.
It has no values or morals or emotions; it lacks lots of things. All it has is breadth of skills.