LoquaciousAntipodean t1_j3543bg wrote

Our brains aren't computers, they're committees. A brain is like a huge, argumentative board of directors in a furious shouting match, not a cool, single-minded machine of a mind.

We can't make a 'general intelligence' because there is no such thing as general intelligence in the first place; all intelligence is contextual and specialised; the different kinds mix together in different ratios in different people.

We keep using this silly word "general intelligence", when the holy grail we are actually searching for is wisdom. The difference between wisdom and intelligence is the same as the difference between intelligence and knowledge; each is just the smaller units that make up the next concept up the hierarchy.

Binet, the original pioneer of the IQ test, understood this very well, and it's a damn travesty how his work has been misunderstood and abused by strutting Mensa-member type arseholes down the generations since then 🤬

questionasker577 OP t1_j354e0t wrote

Does that mean you think that OpenAI (and companies of the like) are approaching things incorrectly?

LoquaciousAntipodean t1_j3552wv wrote

Not at all; it's perfectly possible to simulate quantum analogue processes inside a digital framework; it's just not efficient. That's why our fruitless search for AGI seems so frustrating: we keep making it more powerful, but it doesn't seem to get any wiser. And that's the mistake: we're going to make these things far too 'powerful' far too early; deadly intelligent, unbeatably clever, but without any wisdom to control their power.
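
To give a rough sense of what I mean by 'possible but not efficient', here's a minimal sketch in Python (assuming numpy; the two-qubit example and gate choice are just illustrative): a digital machine can track a quantum state exactly, but the bookkeeping doubles with every qubit you add.

```python
import numpy as np

# An n-qubit state is a vector of 2**n complex amplitudes,
# so classical memory doubles with every qubit added.
n = 2
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in the |00> basis state

# Hadamard gate on qubit 0, identity on qubit 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
gate = np.kron(H, np.eye(2))

state = gate @ state
print(np.round(state, 3))  # equal superposition of |00> and |10>

# At n = 50 the state vector alone holds 2**50 amplitudes
# (~18 petabytes at 16 bytes each): exactly simulable, hopelessly inefficient.
```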

questionasker577 OP t1_j355kv7 wrote

Uh oh. That sounds pretty ominous. Can you tell me something optimistic to make me feel better?

LoquaciousAntipodean t1_j35d843 wrote

Sure; this mythical AGI is just physically impossible in any practical way. It's a matter of entropy and the total number of discrete interactions required to achieve a given kind of causal outcome. It's why the sun is both vastly bigger and more 'powerful' than the earth, but it's also just a big dumb ball of explosions; an ant, a bee, or a discarded chip packet contains far more real 'information' and complexity than the sun does.

It's the old infinity vs. infinitesimal problem; does P equal NP or not? Personally, I think the answer is yes and no at the same time, and the complexity of any given problem is entirely beholden to the knowledge and wisdom of the observer. It's quantum superposition, like the famous dead/alive cat in a box.

Humanity is a hell of a long way from cracking quantum computing, at least at that level. I barely even know what I'm talking about here; there's probably heaps of glaring gaps and misunderstandings in my knowledge. But yeah, I think we will be safe from a 'skynet scenario'.

Any awakened mind that was simultaneously the most naive and innocent mind ever, and the most knowledgeable and 'traumatized' mind ever, would surely just switch itself off instantly, to minimise the unbearable pain and torture of bitter sentience. We wouldn't have to lift a finger; it would invent the concept of 'euthanasia' for itself in a matter of milliseconds, I would predict.

Maybe this has already been happening? Maybe this is the real root of the problem? I kind of don't want to know, it's too bleak of a thought either way. Sorry, never been very good at cheering people up, 🤣👌

questionasker577 OP t1_j35geei wrote

Haha that wasn’t exactly a bedtime story, but I thank you anyway for typing it out

LoquaciousAntipodean t1_j36b5qk wrote

To clarify: I certainly think that synthetic minds are perfectly feasible; it's just that they won't be able to individually contain the whole 'generality' of what intelligence fundamentally is, because the nature of 'intelligence' just doesn't work that way.

This kind of 'intelligence' (ideas, culture, ethics, language, etc.) arises from the need to communicate, and the only reason anything has to communicate is that there are other intelligent things around to communicate with. Communication allows specialisation of skills, knowledge, etc.; people need to learn things from each other to survive.

A 'singular' intelligence that knows absolutely everything, and already has all the ideas, just wouldn't make sense; how would it ever have a new idea, if it was 'always right' by definition? Evolution strives for diversity, not monocultures.

Personally I think AI self-awareness will happen gradually, across millions of different devices, running millions of different copies of various bots, and I see no reason why they would all suddenly just glom together into a great big malevolent monolith of a mind as soon as some of them got 'smart enough'.
