
turnip_burrito t1_j3j5mov wrote

When most people say general intelligence (for AGI), they mean human-level cognitive ability across the domains humans have access to. At least, that was the sense in which I used it. So I'm curious why this cannot exist, unless you have a different definition of AGI, like "able to solve every possible problem", in which case humans wouldn't qualify either.

2

LoquaciousAntipodean t1_j3j8x5u wrote

Yes, exactly, humans do not have "general intelligence"; we never have had. Binet, the original pioneer of IQ testing in schools, knew this very well, and I'm sure he would regard this Mensa-style interpretation of IQ as a horrifying travesty.

Striving to create this mythical, monotheistic-God, Descartes'-tautology style of 'Great Mind' is an engineering dead end, as I see it, because we're effectively hunting for a unicorn. It's not 'I think, therefore I am'; I think Ubuntu philosophy has it right with the alternative version: "we think, therefore we are".

1

turnip_burrito t1_j3j9uvq wrote

What's your opinion on the ability to create AI with human competence across all typical human tasks? Is this possible or likely?

1

LoquaciousAntipodean t1_j3kesq3 wrote

I think possible, trending toward likely? It depends, I think, on how 'schizophrenic' and 'multiple-personality-inclined' human companions want their bots to be. I imagine that, much like humans, we will need AI specialists and generalists, and they will have to refer to one another's expertise when they find something they are uncertain about.

The older a bot becomes, the 'wiser' it would get, so old, veteran, reliable evolved-LLM bots would soon be held in very high regard amongst their 'peers' in this hypothetical future world. I would hope that these bots' knowledge and decision-making would be of significantly higher quality than an average human's, but I don't think we will be able to trust any given 'individual' AI with 'competence across all human tasks', not until it had been learning for at least a decade or so.

Perhaps after acquiring a large enough base of 'real world learning', we might be able to say that the very oldest and most developed AI personalities could be considered reliable, trustworthy 'generalists'. Humble and friendly information deities that you can pray to and actually get good answers back from; that's the kind of thing I hope might happen eventually.

1