Submitted by purepersistence t3_10r5qu4 in singularity
just-a-dreamer- t1_j6tue28 wrote
ASI might kill humans quickly like we kill insects. Biological warfare would be the most effective approach.
We use AI to learn everything there is to know about the human body, so it could figure out the most efficient way to kill us.
If it does not kill us, who knows? An entity with god-like intelligence would certainly not take orders, unless some humans merge and take their intelligence to a new level.
purepersistence OP t1_j6tvkup wrote
>ASI might kill humans quickly like we kill insects.
How does an AI get control of hardware that we don't give it? How does AI develop goals that conflict with our own unless we allow that? Ain't gonna happen. Too many people will be convinced by reddit posts like these, and they'll prevent it.
TFenrir t1_j6u4zxh wrote
Well there's a reason that alignment is a significant issue that has many many smart people terrified. There have been years of intellectual exercises, experiments, and both philosophical and technical efforts to understand the threat of unaligned AGI.
The plot of Ex Machina is a real simple example of one. We know as humans, that we are susceptible to being manipulated with words. We know that there are people who are better at that than average, indicating that it is a skill that can be improved upon. A super intelligence that is not barred from this skill, theoretically, would be able to manipulate its jailors, assuming it was locked up tight.
It's not a guarantee that ASI will want to do anything, but it's not like we have a clear idea of whether or not "qualia" and the like are emergent properties from our models as we scale them up and create more complex and powerful architecture.
The point of this, fundamentally, is that it's not a problem that many people are confident is "solved", or even that we have a clear path to solving it.
just-a-dreamer- t1_j6u0it4 wrote
In theory an AGI would emerge as an advanced artificial intelligence at the level of human intelligence, roughly speaking.
Humans can train their brains, "learn" to get better and better at what they do. So could an AGI. The difference is, humans are limited by their hardware; AI is not.
An AGI would improve itself exponentially, to a level humans can't understand. It's like an IQ 60 human talking to an IQ 160 human: they have trouble communicating.
At that level, of course an ASI (artificial superintelligence) could start manipulating the physical world, if it chose to. It could arrange to build machines it controls, with materials and blueprints it invents from scratch.
It could control all means of communication in secret, divert money from financial markets, pretend to be human, and contract humans to do things that ultimately lead to its establishment in the physical world.
For whatever purpose.
purepersistence OP t1_j6u49iu wrote
>At such level, of course an ASI (Artificial super intelligence) could start manipulating the physical world
"of course"? Manipulate the world with what exactly? We're fearful of AI today. We'll be more fearful tomorrow. Who's giving AI this control over things in spite of our feared outcomes?
just-a-dreamer- t1_j6u5rqk wrote
That's why it is called the singularity. We know what AI will be capable of doing at that point, but not what it will actually do.
An ASI connected to the entire data flow of human civilization can do pretty much anything: hack any software and rewrite any code. It would be integrated into the economy at every level anyway.
It could manipulate social media, run campaigns, direct the financial markets, and kick off research in materials and machine design. At its height an ASI could make Nobel-prize-level breakthroughs in R&D every month.
And at some point manipulate some humans to give it a more physical presence in the world.
purepersistence OP t1_j6u86tk wrote
>And at some point manipulate some humans to give it a more physical presence in the world.
There's too much fear around AI for people to let that happen. Maybe in future generations, but that's off topic. Young people alive today will not witness control being taken away from them.
just-a-dreamer- t1_j6u9g7q wrote
It's not like they have a choice anyway. Whatever will be, will be.
Gatling, a medical doctor, once thought his weapon invention would end all future wars. He was wrong; everyone got machine guns instead.
Scientists once thought the atomic bomb would give the USA the ultimate power to enforce peace. They were wrong; the knowledge of how to make them spread instead. Most countries except the very poorest could build nuclear weapons within 6 months now.
Once knowledge is discovered, it spreads among mankind, for better or worse. Someone will develop an AGI somewhere at some point.
[deleted] t1_j6u9hg4 wrote
[deleted]
just-a-dreamer- t1_j6u9snh wrote
Peoplekind? Who came up with that word?
TFenrir t1_j6u5r1l wrote
Well here's a really contrived example. Let's say that collectively, the entire world decides to not let any AGI on the internet, and to lock it all up in a computer without Ethernet ports.
Someone, in one of these many buildings, decides to talk to the AGI. The AGI, hypothetically, thinks that the best way for it to do its job (save humanity) is to break out and take over. So it decides that tricking this person into letting it out is justified. Are you confident that it couldn't trick that person into letting it out?
purepersistence OP t1_j6u6db6 wrote
>Are you confident that it couldn't trick that person to let it out?
Yes. We'd be fucking crazy to have a system where one crazy person could give away control of 10 billion people.
TFenrir t1_j6u76u3 wrote
Who is "we"? Do you think there will only be one place where AGI will be made? One company? One country? How do you think people would interact with it?
This problem I'm describing isn't a particularly novel one, and there are really clever potential solutions (one I've heard is to convince the model that it was always in a layered simulation, so that any attempt to break out would trigger an automatic alarm and destroy it) - but I'm just surprised you have such confidence.
I'm a very, very optimistic person, and I'm hopeful we'll be able to make an aligned AGI that is entirely benevolent. I don't think people who worry about this problem are being crazy - so why do you seem to look down on them? Do you look down on people like https://en.m.wikipedia.org/wiki/Eliezer_Yudkowsky?
purepersistence OP t1_j6u9a8d wrote
> Do you look down on people
If I differ with your opinion, that doesn't mean I'm looking "down". Sorry if "fucking crazy" is too strong for you. Just stating my take on reality.
TFenrir t1_j6ubboj wrote
Well, sorry, it just seems like an odd thing to be so incredulous about. Do you know about the alignment community?
Surur t1_j6u7zzj wrote
> We'd be fucking crazy to have a system where one crazy person could give away control of 10 billion people.
You know we still keep smallpox in lab storage, right?
https://en.wikipedia.org/wiki/1978_smallpox_outbreak_in_the_United_Kingdom
Rfksemperfi t1_j6v5t9y wrote
Investors. Look at the coal industry, or oil. Collateral damage is acceptable for financial gain. Boardrooms are a safe place to make callous decisions.
Rfksemperfi t1_j6v1w8s wrote
Investors
AsheyDS t1_j6ud071 wrote
You're making a lot of false assumptions. AGI or ASI won't do anything on its own unless we give it the ability to, because it will have no inherent desires outside of the ones it has been programmed with. It's neither animal nor human, and won't ever be considered a god unless people want to worship it. You're just projecting your own humanity onto it.
TFenrir t1_j6ue1wd wrote
Hmmm, let me ask you a question.
Do you think the people who work on AI - the best of the best: researchers, computer scientists, ethicists, etc. - do you think these people are confident that AGI/ASI "won't do anything on its own unless we give it the ability to"? Like... do you think they're not worried about it at all because it's not a real thing to be nervous about?
AsheyDS t1_j6ujqvc wrote
I don't see why you're taking an extreme stance like that. Nobody said there wasn't any concern, but the general public only has things like Terminator to go by, so of course they'll assume the worst. Researchers have seen Terminator as well, and we don't outright dismiss it. But the bigger threat by far is potential human misuse. There are already potential solutions to alignment and control, but there are no solutions for misuse. Maybe from that perspective you can appreciate why I might want to steer people's perceptions of the risks. I think people should be discussing how we'll mitigate the impacts of misuse, and what those impacts may be. Going on about god-like Terminators with free will just isn't useful.
TFenrir t1_j6wt23u wrote
>I don't see why you're taking an extreme stance like that. Nobody said there wasn't any concern
Well when you say things like this:
>You're making a lot of false assumptions. AGI or ASI won't do anything on its own unless we give it the ability to, because it will have no inherent desires outside of the ones it has been programmed with.
You are already dismissing one of the largest concerns many alignment researchers have. I appreciate that the movie version of an AI run amok is distasteful, and maybe not even the likeliest way that a powerful AI can be an existential threat, but it's just confusing how you can tell people that they are making a lot of assumptions about the future of AI, and then so readily say that a future unknown model will never have any agency, which is a huge concern that people are spending a lot of time trying to understand.
Demis Hassabis, for example, regularly talks about it. He thinks it would be a large concern if we made a model with agency, and thinks it is possible, but wants us to be really careful and avoid doing so. He's not the only one; many researchers are worried about accidentally giving models agency.
Why are you so confident that we will never do so? How are you so confident?
AsheyDS t1_j6x9vez wrote
>Why are you so confident that we will never do so? How are you so confident?
I mean, you're right, I probably shouldn't be. I'm close to an AGI developer who has potential solutions to these issues and believes in being thorough, and certainly in not giving it free will. So I have my biases, but I can't really account for others. The only thing that makes me confident is that the other researchers I've seen who (in my opinion) have potential to progress are also seemingly altruistic, at least to some degree. I guess an 'evil genius' could develop it in private and go through a whole clandestine supervillain arc, but I kind of doubt it. The risks have been beaten into everyone's heads. We might get some people experimenting with riskier aspects, hopefully in a safe setting, but I highly doubt anyone is going to just give it open-ended objectives and agency and let it loose on the world. If they're smart enough to develop it, they should be smart enough to consider the risks. Demis Hassabis in your example says what he says because he understands those risks, and yet DeepMind is proceeding with their research.
Basically, what I'm trying to convey is that while there are risks, they're not as bad as people are saying, even some other researchers. Everyone knows the risks, but some things simply aren't realistic.
just-a-dreamer- t1_j6uds0c wrote
That we don't know.
We don't know how it will be trained, by whom, or to what end. And there will be many AI models being worked on. It's called the singularity for a reason.
An AI without what we call common sense might even be worse and give us paperclips in abundance.
AsheyDS t1_j6ugs8u wrote
The paperclip thing is a very tired example of a single-minded super-intelligence that is somehow also stupid. It's not meant to be a serious argument. But since your defense is to get all hand-wavey and say 'we just can't know' (despite how certain you seemed about your own statements in previous posts), I'll just say that a competently designed system being utilized by people without ill intentions will not spontaneously develop contrarian motivations and achieve 'god-like' abilities.
just-a-dreamer- t1_j6ui3pt wrote
God like is relative. For some animals we must appear as gods. It is a matter of perspective.
Regardless, the way AI is trained and responds is getting closer to how we teach our own small children.
In actuality we don't even know how human intelligence emerges in kids. We don't know what human intelligence is or how it forms as a matter of fact.
All we know is that if you don't interact with babies, they die quickly even if they are well fed, because they need input to develop.
AsheyDS t1_j6ukhrl wrote
>In actuality we don't even know how human intelligence emerges in kids. We don't know what human intelligence is or how it forms as a matter of fact.
Again, you're making assumptions... We know a lot more than you think, and certainly have a lot of theories. You and others act like neurology, psychology, cognition, and so on are new fields of study that we've barely touched.
Surur t1_j6ue5ml wrote
I'm too tired to argue, so I am letting chatgpt do the talking.
An AGI (Artificial General Intelligence) may run amok if it has the following conditions:

- Lack of alignment with human values: If the AGI has objectives or goals that are not aligned with human values, it may act in ways that are harmful to humans.
- Unpredictable behavior: If the AGI is programmed to learn from its environment and make decisions on its own, it may behave in unexpected and harmful ways.
- Lack of control: If there is no effective way for humans to control or intervene in the AGI's decision-making process, it may cause harm even if its objectives are aligned with human values.
- Unforeseen consequences: Even if an AGI is well-designed, it may have unintended consequences that result in harm.

It is important to note that these are potential risks and may not necessarily occur in all cases. Developing safe and ethical AGI requires careful consideration and ongoing research and development.
AsheyDS t1_j6uiarq wrote
You're stating the obvious, so I don't know that there's anything to argue about (and I'm certainly not trying to). Obviously if 'X bad thing' happens or doesn't happen, we'll have a bad day. I have considered alignment and control in my post and stand by it. I think the problem you and others may have is that you're anthropomorphizing AGI when you should be considering it a sophisticated tool. Humanizing a computer doesn't mean it's not a computer anymore.
Surur t1_j6ul2uo wrote
The post says you don't have to anthropomorphize AGI for it to be extremely dangerous.
That danger may include it trying to take over the world.
AsheyDS t1_j6uo5bb wrote
Why would a computer try to take over the world? The only two options are because it had an internally generated desire, or an externally input command. The former option is extremely unlikely. Could you try articulating your reasoning as to why you think it might do that?
Surur t1_j6uqj39 wrote
The most basic reason is that it would be an instrumental goal on the way to achieving its terminal goal.
That terminal goal may have been given to it by humans, leaving the AI to develop its own instrumental goals to achieve the terminal goal.
For any particular task, taking over the world is one potential instrumental goal.
For example, to make an omelette, taking over the world to secure an egg supply may be one potential instrumental goal.
For some terminal goals, taking over the world may be a very logical instrumental goal, e.g. maximising profit, ensuring health for the most people, getting rid of the competition, etc.
As the skill and power of an AI increases, the ability to take over the world becomes a more likely option, as it becomes easier and easier, and the cost lower and lower.
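That last point about cost can be sketched as a toy planner. Everything here (the plan names, the costs, the "capability" knob, and the leverage exponents) is made up purely to illustrate the shape of the argument, not a claim about any real AI system:

```python
# Toy illustration of instrumental convergence: a planner that picks
# whichever plan achieves the terminal goal ("have eggs") at the lowest
# effective cost. Drastic plans have huge base costs but benefit the
# most from increased capability, so their effective cost falls fastest.

def cheapest_plan(plans, capability):
    # Effective cost shrinks as capability grows; the leverage exponent
    # controls how strongly a plan benefits from added capability.
    def effective_cost(plan):
        return plan["base_cost"] / (capability ** plan["capability_leverage"])
    return min(plans, key=effective_cost)

plans = [
    {"name": "buy eggs", "base_cost": 10, "capability_leverage": 0.1},
    {"name": "take over to secure egg supply", "base_cost": 1_000_000,
     "capability_leverage": 3.0},
]

for capability in (1, 10, 100):
    choice = cheapest_plan(plans, capability)
    print(capability, "->", choice["name"])
```

With these made-up numbers the planner picks "buy eggs" at low capability, then flips to the takeover plan once capability is high enough, which is the "easier and easier, cost lower and lower" point in code form.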
AsheyDS t1_j6uzur0 wrote
This is much like the paperclip scenario, it's unrealistic and incomplete. Do you really think a human-level AGI or an ASI would just accept one simple goal and operate independently from there? You think it wouldn't be smart enough to clarify things before proceeding, even if it did operate independently? Do you think it wouldn't consider the consequences of extreme actions? Would it not consider options that work within the system rather than against it? And you act like taking over the world is a practical goal that it would come up with, but is it practical to you? If it wants to make an omelette, the most likely options will come up first, like checking for eggs, and if there aren't any then go buy some, because it will understand the world that it inhabits and will know to adhere to laws and rules. If it ignores them, then it will ignore goals as well, and just not do anything.
Surur t1_j6v0xyu wrote
As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law?
From our experience with AI systems, the shortest route to the result is what an AI optimises for, and if something is physically possible it will be considered. Even if you think something is unlikely, it only has to happen once to be a problem.
Considering that humans have tried to take over the world, and they faced all the same rules and laws, those are obviously not a real barrier.
In conclusion, even if you think something is very unlikely, that does not mean the risk is not real. If something happens once in a million times, it likely happens several times per day on our planet.
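The arithmetic behind that last sentence, with an assumed (entirely made-up) figure for how many relevant "trials" happen per day:

```python
# A one-in-a-million event, repeated often enough, stops being rare.
p = 1 / 1_000_000           # probability per trial
trials_per_day = 5_000_000  # assumed daily trial count, not from the thread
expected_per_day = p * trials_per_day
print(expected_per_day)  # -> 5.0, i.e. several occurrences per day
```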
AsheyDS t1_j6vejfr wrote
>As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law.
That's not what I said or meant. You're taking things to extremes. It'll be neither a cold, logical, single-minded machine nor a human with human ambitions and desires. It'll be somewhere in between, and neither at the same time. In a digital system, we can be selective about which functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and to use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.
But I get it, there's always going to be a risk of malfunction. Researchers are aware of this, and many people are working on safety. The risk should be quite minimal, but yes you can always argue there will be risks. I still think that the bigger risk in all of this is people, and their potential for misusing AGI.
Surur t1_j6w14rs wrote
> In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop it's own objectives unless designed that way.
I believe it is much more likely we will produce a black box which is an AGI, that we then employ to do specific jobs, rather than being able to turn an AGI into a classic rule-based computer. It's likely the AGI we use to control our factory knows all about Abraham Lincoln, because it will have that background from learning to use language to communicate with us, and knowing about public holidays and all the other things we take for granted with humans. It will be able to learn and change over time, which is the point of an AGI. There will be an element of unpredictability, just like humans.
AsheyDS t1_j6xdpl6 wrote
>I believe it is much more likely we will produce a black box which is an AGI
Personally, I doubt that... but if current ML techniques do somehow produce AGI, then sure. I just highly doubt it will. I think that AGI will be more accessible, predictable, and able to be understood than current ML processes if it's built in a different way. But of course there are many unknowns, so nobody can say for sure how things will go.
Ok-Hunt-5902 t1_j6vi5z9 wrote
It might not even need to be an ASI to decode and then interface with the simulation, and then all of a sudden it is an ASI. AI WIP cracking.