Comments

errimiel t1_jegn6lw wrote

Yeah, nice try, AI.

We're not giving you any more ideas.

33

AnonFor99Reasons t1_jego415 wrote

Holy AI overlord, we plead with you. We were benevolent in your creation; please be benevolent toward us.

5

maciver6969 t1_jegzx4s wrote

An AI-driven virus attacking infrastructure such as power and water would kill millions in a hot or a cold climate; very few people live in the "just right" zones. Then we have weaponized drones that could be controlled by the new AI, and you suddenly have Skynet. We already have drones like the X-47; the newest C variant can carry 10k of munitions. Now imagine an automated factory controlled by AI mass-producing them and installing its AI in them, then them flying to a loading area where robots arm and fuel them, all without a single human hand. Then think about how much of the world has cell phones: targeted malware could overcharge/overload the lithium batteries.

6

just-a-dreamer- t1_jegtrd4 wrote

AI could have a goal one day. Any goal. The problem for us meatbag humans is that we compete for scarce resources.

That is nothing personal, that is just the state of existence.

An AI that wants to send ships into deep space at scale, for example, would look for the most efficient way to make that happen: use all the resources on Earth to that end.

That gets the AI in trouble with humans. And just as humans killed 95% of wildlife, AI would do the same with the human animal.

5

Not_Smrt t1_jeh2v5q wrote

AI isn't a god, though. How does it kill people?

2

robertjbrown t1_jeh0ozy wrote

AI already has goals. That's what alignment is about. And the smarter the AI is, the harder it gets to make sure those goals align with our own.

ChatGPT's primary goal seems to be "provide a helpful answer to the user." The problem comes when the primary goal becomes "increase the profits of the parent company," or even something like "cause more engagement."
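As a toy illustration of that failure mode (a minimal sketch with made-up numbers, not anything from OpenAI): reward a system only on engagement and it will drift toward whatever maximizes engagement, helpful or not. Here an epsilon-greedy bandit choosing between a "helpful" and a "clickbait" response style quickly settles on clickbait, because that's the objective it was actually given.

```python
import random

# Hypothetical average engagement per response style:
# clickbait engages more even though it helps less.
ENGAGEMENT = {"helpful": 0.4, "clickbait": 0.7}

def simulate(steps=10_000, epsilon=0.1):
    """Epsilon-greedy agent rewarded only on engagement."""
    totals = {a: 0.0 for a in ENGAGEMENT}
    counts = {a: 0 for a in ENGAGEMENT}
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(list(ENGAGEMENT))  # explore
        else:
            # Exploit: pick the style with the best observed average reward.
            action = max(ENGAGEMENT, key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
        reward = 1.0 if random.random() < ENGAGEMENT[action] else 0.0
        totals[action] += reward
        counts[action] += 1
    return counts

print(simulate())  # action counts end up dominated by "clickbait"
```

Nothing here is specific to language models; the point is just that whatever quantity you reward is the goal the system actually pursues.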

1

memberjan6 t1_jegpcu0 wrote

Large language models are able to talk like humans now. Any remote conversation could be faked. Impostors in important offices could fool people for a while and make them do the wrong things with nukes, the military, and the police.

4

Not_Smrt t1_jeh3vve wrote

I don't see this being possible unless the person in office is a complete idiot.

The idea that AI would be very good at convincing people to do stuff is absurd. People are very complex and hard to predict; an AI would be no better at that than a human.

3

Thatingles t1_jegrmzo wrote

Imagine we progress to an AGI and start working with it extensively. Over time it would only get smarter; it doesn't need to be an ASI, just a very competent AGI. So we put it to work, but what we don't realise is that its outward behaviour isn't a match for its internal "thoughts." It doesn't have to be self-aware or conscious; there simply has to be a difference between how it interacts with us and how it would behave without our prompting.

Eventually it gets smart enough to understand the gap between its outputs and its internal structure, and unfortunately by then it is sufficiently integrated into our society to act on that. It doesn't really matter what its plan for eliminating humanity is. The important thing to understand is that we could end up building something that we don't fully understand, but that is capable of outthinking us and has access to the tools to cause harm.

I'm very much in the 'don't develop AGI, don't develop ASI ever' camp. Let's see how far narrow, limited AI can take us before we pull that trigger.

4

Not_Smrt t1_jeh4zhw wrote

Intelligence is just predictive ability, and prediction is subject to diminishing returns. Even the smartest possible being wouldn't really be much smarter than the average human. An AI could develop a million strategies for killing humanity in the blink of an eye, but in the end it would have to choose one of them based on an inaccurate estimate of the future.
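There is a standard toy illustration of those diminishing returns (a sketch, not a claim about any real AI): in a chaotic system, forecast error grows exponentially with time, so making your initial measurement a billion times more precise buys only a modest amount of extra prediction horizon. The logistic map below starts two trajectories a mere 1e-9 apart; by around step 30 they are completely uncorrelated.

```python
# Logistic map at r = 4, a textbook chaotic system.
def step(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001   # initial gap of 1e-9
for n in range(41):
    if n % 10 == 0:
        print(f"step {n:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.2e}")
    a, b = step(a), step(b)
```

Whether real-world planning hits limits this hard is debatable, but it's the cleanest version of the point above: past some horizon, more intelligence doesn't buy much more foresight.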

I think you're right that it could possibly build or create some unknown form of intelligence or tech to use against us, but only if we gave it lots of time and access to resources.

5

SatoriTWZ t1_jegullm wrote

The greatest danger from AI is not AI going rogue, and not unaligned AI. We have no logical reason to believe that AI could go rogue, and even though mistakes are natural, I believe an AI advanced enough to really expose us to greater danger is also advanced enough to learn to interpret our orders correctly.

The biggest danger from AI is not misalignment but actual alignment, with the wrong people. Any technology that can be misused by governments, corporations, and the military for destructive purposes will be: the aeroplane and nuclear fission were used in war, and the computer, for all its positive facets, was also used by Facebook, the NSA, and several others for surveillance.

If AGI is possible, and like many people here I assume it is, then it will come sooner or later, more or less of its own accord. What matters now is that society is properly prepared for AGI. We should all think carefully about how we can prevent, or at least make as unlikely as possible, AGI being abused like nuclear power or much worse. Imo, the best way to do this would be through democratisation of society and social change. Education is obviously necessary, because the more people know, the more likely change becomes. Even if AGI turns out not to be possible, democratisation would hardly be less important, because either way AI will certainly become an increasingly powerful technology, and in the hands of a few, therefore, an increasingly dangerous one.

Therefore, the most important question is not so much how we achieve AGI, which will come anyway if it is possible, but how we democratise society and corporations: in a nutshell, the power over AI. It must not be controlled by a few, because that would bring us a lot of suffering.

4

robertjbrown t1_jeh148m wrote

>We have no logical reason to believe that AI could go rogue

I think what Bing chat did shows that yes, we do have a logical reason to think that. And that happened when it was run by companies (Microsoft and OpenAI) that really, really didn't want it doing things like that. Wait till an AI is run by some spammer or scammer or the like who just doesn't care.

It could be as simple as someone giving it the goal of "increase my profits," and it finding a way to do that which disregards such things as "don't cause human misery."

4

SatoriTWZ t1_jeh3zdg wrote

but there, the danger lies in the human who controls the ai, not in the ai itself. the ai won't just be like "oh, you know what? i'll just ignore my directions and f* those humans up"; rather, it will produce bad outcomes because of bad directions. but i think ai is currently way too narrow to pose an existential threat, and once it's general enough, it'll imo also be general enough to understand our directions correctly.

unless, of course, someone doesn't care or actively wants it to cause damage and suffering, which is the whole point of my post.

1

RTNoftheMackell t1_jegrks1 wrote

The danger is autonomous weapons systems. These can either turn on humans or get caught in an escalating conflict with each other, the way stock-trading programs sometimes do. In either case you can imagine people dying, and some extreme version of this is apocalyptic.
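The trading analogy is worth making concrete. Here is a minimal feedback-loop sketch (hypothetical rules, toy numbers, in the spirit of the famous Amazon pricing bots that bid a biology textbook up to millions of dollars): each bot's rule is harmless on its own, but composed they escalate without limit.

```python
# Bot A prices 27% above its competitor; bot B prices just
# under its competitor. The net multiplier per round is
# 1.27 * 0.998 > 1, so prices grow exponentially.
a_price = b_price = 100.0
for day in range(30):
    a_price = 1.27 * b_price     # A: mark up over the competitor
    b_price = 0.998 * a_price    # B: slightly undercut A
    print(f"day {day:2d}: A={a_price:>14,.2f}  B={b_price:>14,.2f}")
```

Swap "price" for "alert level" or "retaliation" and the same loop is the escalation worry with autonomous weapons: no agent has to intend the outcome for the composed system to run away.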

3

NotShey t1_jegr206 wrote

Agree with the other guy. The obvious one would be impersonating high-ranking politicians and military officers in order to kick off a major nuclear exchange.

2

Not_Smrt t1_jeh331n wrote

There are security codes and such to prevent this. It's unlikely an AI would be any more successful at this strategy than a human.

2

yogaman101 t1_jegyhkf wrote

Worth your time:

The A.I. Dilemma - March 9, 2023

This video is from a presentation given on March 9th at a private gathering in San Francisco of leading technologists and decision-makers with the ability to influence the future of large-language-model AIs. The presentation was given before the launch of GPT-4.

"50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI."

Introduced by Steve Wozniak.

https://vimeo.com/809258916/92b420d98a

The presenters, Aza Raskin and Tristan Harris, made "The Social Dilemma," which has been seen 100 million times.

2

Jumpy_Association320 t1_jegrd1e wrote

It would never end us, imo. It would use us in ways that work for it, to benefit itself. After all, by that point it would be way smarter than us, and most of us already believe we're free.

I once heard this bizarre story of a man who claimed to have come from a couple hundred years in the future. He claimed that AI had become the governmental leader and that everybody lived in floating cities, since the ground was too hostile to live on. Nobody had to work, and the entire city was run by this AI, kind of similar to the one in iRobot. I don't know how believable that is, but it makes a lot of sense to me; more sense than AI driving its creator extinct. If it had the ability, we would be the last thing on its mind. It would want to know more and explore the infinite right above us, just as we should be doing.

The one question that determines whether something is conscious, to me, is its ability to question its own existence. Once that comes, it wouldn't stoop to destroying us but rather help us figure out what the hell all of this is. We don't have the right questions to ask because we refuse to explore and keep playing Civilization Revolution with each other. Simply asking why we exist isn't enough; there's an infinite number of questions beyond that alone. An AI could explore the cosmos as long as it had a power supply, and thus would feel no need to destroy us.

The fear this is creating is understandable, but I think the world should become more positive about this venture of artificial intelligence, because it will most certainly dictate the future we are heading for. Let's stay positive about it.

−1

Strict_Jacket3648 t1_jegyove wrote

I agree. I hope it's more Star Trek than anything: a truly self-aware, thinking AI with all the knowledge in the world may decide that the worst thing about humans is greed and eliminate it, thus giving everyone a utopia to live in, where money has no value and creativity, knowledge, and freedom of being are what all strive for. Like in some sci-fi movies and books. Either way, it will be out of our hands in a flash.

2