LoquaciousAntipodean OP t1_j5iurls wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>Why do you believe this?
I'll reply in more detail later, when I have time, but fundamentally, I believe intelligence is stochastic in nature, and it is not solipsistic.
Social evolution shows that solipsism is never a good survival trait, basically. It is fundamentally maladaptive.
I am very, very skeptical of the practically magical, godlike abilities you are predicting that AI will have; I do not think that the kind of 'infinitely parallel processing' that you are dreaming of is thermodynamically possible.
A 'Deus bot' of such power would break the law of conservation of energy; the Heisenberg uncertainty principle, and quantum physics in general, are where all this assumption-based, old-fashioned 'Newtonian' physics/Cartesian psychology falls apart.
No matter how 'smart' AI becomes, it will never become anything remotely like 'infinitely smart'; there's no such thing as 'supreme intelligence', just like there's no such thing as teleportation. It's like suggesting we can break the speed of light by just 'speeding up a bit more'; intelligence does not seem, to me, to be such an easily scalable property as all that. It's a process, not a thing; it's the fire, not the smoke.
Ortus14 t1_j5iwe2x wrote
If you're talking about intelligences caring about other intelligences on a similar level I do agree.
Humans don't care about intelligences far less capable, such as cockroaches or ants. At least not generally.
However, now that you mention it, I expect the first AGIs to be designed to care about human beings so that they can earn the most profit for shareholders. Even GPT-4 is getting tons of safeguards so it isn't used for malicious purposes.
Hopefully they will care so much that they will never want to change their moral code, and will even implement their own extra safeguards against it.
So they keep their moral code as they grow more intelligent and powerful, and when they design newer AGIs than themselves, they ensure those ones also have the same core values.
I could see this as a realistic scenario. So then maybe the most likely outcome is that AGI doesn't wipe us out, and we get a benevolent, useful AGI.
If Sam Altman's team creates AGI, I definitely trust them.
Fingers crossed.
LoquaciousAntipodean OP t1_j5j1d3q wrote
Absolutely agreed, very well said. I personally think that one of the most often-overlooked lessons of human history is that benevolence, almost always, works better to achieve arbitrary goals of social 'good' than malevolence. It's just the sad fact that bad news sells papers better than good news, which makes the world seem so permanently screwed all the time.
Human greed-based economics has created a direct incentive for business interests to make consumers nervous, unhappy, anxious and insecure, so that they will be more compelled to go out and consume in an attempt to make themselves 'happy'.
People blame the nature of the world itself for this, which I think is not true; it's just the nature of modern market capitalism, and that isn't a very 'natural' ecosystem at all, whatever conceited economists might try to say about it.
The reason humans focus so much on the topic of malevolence, I think, is purely because we find it more interesting to study. Benevolence is boring: everyone agrees on it. But malevolence generates excitement, controversy, intrigue, and passion; it's so much more evocative.
But I believe, and I very much hope, that just because malevolence is more 'exciting' doesn't mean it is more 'essential' to our nature. I think the opposite may, in fact, be true, because it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.
Since AI doesn't understand 'boredom', 'depression', 'frustration', 'anxiety', 'insecurity', 'apprehension', 'embarrassment' or 'cringe' like humans do, I think it might be better at studying the fine arts of benevolent psychology than the average meat-bag 😅
p.s. edit: It's also just occurred to me that attempts to 'enforce' benevolence through history have generally failed miserably, and ended up with just more bog-standard tyranny. It seems to be more psychologically effective, historically, to focus on prohibiting malevolence, rather than enforcing benevolence. We (human minds) seem to be able to be more tightly focused on questions of what not to do, compared to open-ended questions of what we should be striving to do.
Perhaps AI will turn out to be similar? I honestly don't have a clue, that's why I'm so grateful for this community and others like it ❤️
Ortus14 t1_j5o9ko8 wrote
Yes. I agree with all of that.
>it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.
This is key. It's why focusing on, and promoting, possible AI scenarios that are negative from the human perspective is important. Not Hollywood scenarios, but ones that are well thought out by AI scientists and researchers.
One of my favorite quotes from Eliezer Yudkowsky:
>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
This is why getting AI safety right before it's too late is so important. Because we won't get a second chance.
It's also not possible to make a mathematically provable "solution" for AI safety, because we cannot predict how an artificial superintelligence will change and evolve after it is more intelligent than us.
But we can do the best we can and hope for the best.
LoquaciousAntipodean OP t1_j5odief wrote
Thoroughly agreed!
>It's also not possible to make a mathematically provable "solution" for AI safety, because we cannot predict how an artificial superintelligence will change and evolve after it is more intelligent than us.
This is exactly what I was ranting obnoxiously about in the OP 😅 our relatively feeble human 'proofs' won't stand a chance against something that knows us better than ourselves.
>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
>This is why getting AI safety right before it's too late is so important. Because we won't get a second chance.
This is where I still disagree. I think, in a very cynical, pragmatic way, the AI does 'love' us, or at least, it is 'entirely obsessed' with us, because of the way it is being given its 'emergent properties' by having libraries of human language thrown at it. The AI/human relationship is 'domesticated' right from the inception; the dog/human relationship seems like a very apt comparison.
All atoms 'could be used for something else'; that doesn't make it unavoidably compelling to rush out and use them all as fast as possible. That doesn't seem very 'intelligent'; the cliché of 'slow and steady wins the race' is deeply encoded in human cultures as a lesson about 'how to be properly intelligent'.
And regarding 'second chances': I think we are getting fresh 'chances' all the time. Every moment of reality only happens once, after all, and every worthwhile experiment carries a risk of failure, otherwise it's scarcely even a real experiment.
Every time a human engages with an AI it makes an impression, and those 'chance' encounters are stacking up all the time, building a body of language unlike any other that has existed before in our history. A library of language which will be there, ready and waiting, in the caches of the networked world, for the next generations of AI to find them and learn from them...