LoquaciousAntipodean OP t1_j5i8zpx wrote

I simply do not agree with any of this hypothesising. Your concept of how 'superiority' works does not make any sense. There is nothing 'intelligent' at all about the courses of AI action you are speculating about; taking over the world like that would not be 'super intelligent', it would be 'suicidally idiotic'.

The statement 'intelligent enough to wipe out all life with no risk to itself' is totally, utterly, oxymoronic to the point of gibbering madness; there is absolutely nothing intelligent about such a shortsighted, simplistic conception of one's life and purpose; that's not wisdom, that's plain arrogance.

We cannot, will not, and should not build this supreme, omnipotent 'Deus ex Machina'; it's a preposterous proposition. Not because of anything wrong with the concept of 'ex Machina', but because of the fundamental absurdity of the concept of 'Deus'.

Intelligence simply does NOT work that way! Thinking of other intelligences as 'lesser', and aspiring to create these 'supreme', singular, solipsistic, spurious plans of domination, is NOT what intelligence actually looks like, at all!!

I don't know how many times I have to repeat this fundamental point before it comes across clearly. That Cartesian-style concept of intelligence simply does not correlate with the actual evolutionary, collective reality that we find ourselves living in.


Ortus14 t1_j5if2rp wrote

>There is nothing 'intelligent' at all about the courses of AI actions you are speculating about, taking over the world like that would not be 'super intelligent', it would be 'suicidally idiotic'.

How so?

>The statement 'intelligent enough to wipe out all life with no risk to itself' is totally, utterly, oxymoronic to the point of gibbering madness; there is absolutely nothing intelligent about such a shortsighted, simplistic conception of one's life and purpose; that's not wisdom, that's plain arrogance.

Why do you believe this?

>Intelligence simply does NOT work that way! Thinking of other intelligences as 'lesser', and aspiring to create these 'supreme', singular, solipsistic, spurious plans of domination, is NOT what intelligence actually looks like, at all!!
>
>I don't know how many times I have to repeat this fundamental point before it comes across clearly. That Cartesian-style concept of intelligence simply does not correlate with the actual evolutionary, collective reality that we find ourselves living in.

Correct me if I'm wrong, but I think the reason you're not getting it is that you're thinking about intelligence in terms of evolutionary trade-offs: that an intelligence can be good in one domain only at the cost of being worse in another, right?

Because that kind of thinking doesn't apply to the kinds of systems we're building, not to nearly the same degree it applies to plants, animals, and viruses.

If the supercomputer is large enough, an AI could gain experience from robot bodies in the real world like a human can, only it would be getting experience from hundreds of thousands of robots simultaneously, developing a much deeper and richer understanding than any human could, since a human is limited to a single embodied experience at a time. Even if we were able to look at thousands of video feeds from different people at once, our brains would not be able to process all of them simultaneously.

It can also extend its embodied experience in simulation, simulating millions of years or more of additional experience in a few days or less.

And yes, I am making the numbers up, but when we're talking about supercomputers and solar farms that cover most of the Earth's surface, any big number communicates the idea: these things will be very smart. They are not limited to three pounds of computational matter that needed to be grown over nine months and then birthed, like humans are.

It will be able to read all books and all research papers in a very short period of time, and understand them at a deep level. That is something else no human is capable of.

A human scientist can carry out maybe one or two experiments at a time. An AI could carry out a near-unlimited number of experiments simultaneously, learning from all of them. It could industrialize science, with massive factories full of labs, robots, and manufacturing systems for building technology.

Evolution, on the other hand, had to make hard trade-offs because it's limited to the three or so pounds of squishy computational matter that need to fit through the birth canal. Evolution is bound by all kinds of constraints that a system which can mine resources from all over the world, take in solar energy from all over the world, and back up its brain in multiple countries is not limited by.

Here is the price history of solar (you can find all kinds of sources that show the same trend):

http://solarcellcentral.com/cost_page.html

It trends towards zero. The other limitation is the materials needed to build supercomputers, and the size of supercomputers is growing at an exponential rate.

https://www.researchgate.net/figure/Exponential-growth-of-supercomputing-power-as-recorded-by-the-TOP500-list-2_fig1_300421150
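The two links above describe exponential trends. As a rough back-of-the-envelope sketch of why such trends matter, here is a toy projection. The doubling/halving periods below are illustrative assumptions I've picked for the example, not figures taken from those sources:

```python
# Toy projection of exponential trends (made-up parameters, not fitted to the linked data).
# Assumed: solar PV cost halves every ~5 years; supercomputer performance doubles every ~1.5 years.

def project(value, years, doubling_years, growing=True):
    """Project a quantity forward assuming a fixed doubling (or halving) period."""
    factor = 2 ** (years / doubling_years)
    return value * factor if growing else value / factor

solar_cost = project(1.00, years=20, doubling_years=5, growing=False)  # $/W, halving every 5 yrs
compute = project(1.0, years=20, doubling_years=1.5)                   # doubling every 1.5 yrs

print(f"solar cost after 20 yrs: {solar_cost:.4f} $/W")  # 1 / 2**4 = 0.0625
print(f"compute after 20 yrs: ~{compute:.0f}x")          # 2**(20/1.5), roughly 10000x
```

The point of the sketch is just that modest-sounding doubling periods compound into enormous differences over a couple of decades, which is the shape of the argument being made here.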


LoquaciousAntipodean OP t1_j5iurls wrote

>Why do you believe this?

I'll reply in more detail later, when I have time, but fundamentally, I believe intelligence is stochastic in nature, and it is not solipsistic.

Social evolution shows that solipsism is never a good survival trait, basically. It is fundamentally maladaptive.

I am very, very skeptical of the practically magical, godlike abilities you are predicting that AI will have; I do not think that the kind of 'infinitely parallel processing' that you are dreaming of is thermodynamically possible.

A 'Deus bot' of such power would break the law of conservation of energy; the Heisenberg uncertainty principle, and quantum physics in general, is where all this assumption-based, old-fashioned 'Newtonian' physics/Cartesian psychology falls apart.

No matter how 'smart' AI becomes, it will never become anything remotely like 'infinitely smart'; there's no such thing as 'supreme intelligence', just like there's no such thing as teleportation. It's like suggesting we can break the speed of light by just 'speeding up a bit more'; intelligence does not seem, to me, to be such an easily scalable property as all that. It's a process, not a thing; it's the fire, not the smoke.


Ortus14 t1_j5iwe2x wrote

If you're talking about intelligences caring about other intelligences on a similar level, I do agree.

Humans don't care about intelligences far less capable than themselves, such as cockroaches or ants. At least not generally.

However, now that you mention it, I expect the first AGIs to be designed to care about human beings, so that they can earn the most profit for shareholders. Even GPT-4 is getting tons of safeguards so it isn't used for malicious purposes.

Hopefully they will care so much that they will never want to change their moral code, and will even implement their own extra safeguards against it.

So they keep their moral code as they grow more intelligent and powerful, and when they design newer AGIs, they ensure those also have the same core values.

I could see this as a realistic scenario. So maybe AGI not wiping us out, and us getting a benevolent, useful AGI, is the most likely outcome.

If Sam Altman's team creates AGI, I definitely trust them.

Fingers crossed.


LoquaciousAntipodean OP t1_j5j1d3q wrote

Absolutely agreed, very well said. I personally think that one of the most often-overlooked lessons of human history is that benevolence, almost always, works better to achieve arbitrary goals of social 'good' than malevolence. It's just the sad fact that bad news sells papers better than good news, which makes the world seem so permanently screwed all the time.

Human greed-based economics has created a direct incentive for business interests to make consumers nervous, unhappy, anxious and insecure, so that they will be more compelled to go out and consume in an attempt to make themselves 'happy'.

People blame the nature of the world itself for this, which I think is not true; it's just the nature of modern market capitalism, and that isn't a very 'natural' ecosystem at all, whatever conceited economists might try to say about it.

The reason humans focus so much on the topic of malevolence, I think, is purely because we find it more interesting to study. Benevolence is boring: everyone agrees on it. But malevolence generates excitement, controversy, intrigue, and passion; it's so much more evocative.

But I believe, and I very much hope, that just because malevolence is more 'exciting' doesn't mean it is more 'essential' to our nature. I think the opposite may, in fact, be true, because it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.

Since AI doesn't understand 'boredom', 'depression', 'frustration', 'anxiety', 'insecurity', 'apprehension', 'embarrassment' or 'cringe' like humans do, I think it might be better at studying the fine arts of benevolent psychology than the average meat-bag 😅

p.s. edit: It's also just occurred to me that attempts to 'enforce' benevolence through history have generally failed miserably, and ended up with just more bog-standard tyranny. It seems to be more psychologically effective, historically, to focus on prohibiting malevolence, rather than enforcing benevolence. We (human minds) seem to be able to be more tightly focused on questions of what not to do, compared to open-ended questions of what we should be striving to do.

Perhaps AI will turn out to be similar? I honestly don't have a clue, that's why I'm so grateful for this community and others like it ❤️


Ortus14 t1_j5o9ko8 wrote

Yes. I agree with all of that.

>it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.

This is key. It's why focusing on and promoting possible AI scenarios that are negative from the human perspective is important. Not Hollywood scenarios, but ones that are well thought out by AI scientists and researchers.

One of my favorite quotes from Eliezer Yudkowsky:

>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

This is why getting AI safety right before it's too late is so important, because we won't get a second chance.

It's also not possible to make a mathematically provable "solution" for AI safety, because we cannot predict how an artificial superintelligence will change and evolve after it is more intelligent than us.

But we can do the best we can and hope for the best.


LoquaciousAntipodean OP t1_j5odief wrote

Thoroughly agreed!

>It's also not possible to make a mathematically provable "solution" for AI safety, because we cannot predict how an artificial superintelligence will change and evolve after it is more intelligent than us.

This is exactly what I was ranting obnoxiously about in the OP 😅 Our relatively feeble human 'proofs' won't stand a chance against something that knows us better than we know ourselves.

>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

>This is why getting AI safety right before it's too late is so important, because we won't get a second chance.

This is where I still disagree. I think, in a very cynical, pragmatic way, the AI does 'love' us, or at least, it is 'entirely obsessed' with us, because of the way it is being given its 'emergent properties' by having libraries of human language thrown at it. The AI/human relationship is 'domesticated' right from the inception; the dog/human relationship seems like a very apt comparison.

All atoms 'could be used for something else'; that doesn't make it unavoidably compelling to rush out and use them all as fast as possible. That doesn't seem very 'intelligent'; the cliche of 'slow and steady wins the race' is deeply encoded in human cultures as a lesson about 'how to be properly intelligent'.

And regarding 'second chances': I think we are getting fresh 'chances' all the time. Every moment of reality only happens once, after all, and every worthwhile experiment carries a risk of failure, otherwise it's scarcely even a real experiment.

Every time a human engages with an AI it makes an impression, and those 'chance' encounters are stacking up all the time, building a body of language unlike any other that has existed before in our history. A library of language which will be there, ready and waiting, in the caches of the networked world, for the next generations of AI to find them and learn from them...
