
AsheyDS t1_j6ud071 wrote

You're making a lot of false assumptions. AGI or ASI won't do anything on its own unless we give it the ability to, because it will have no inherent desires outside of the ones it has been programmed with. It's neither animal nor human, and won't ever be considered a god unless people want to worship it. You're just projecting your own humanity onto it.

1

TFenrir t1_j6ue1wd wrote

Hmmm, let me ask you a question.

Do you think the people who work on AI - like the best of the best, researchers, computer scientists, ethicists, etc - do you think that these people are confident that AGI/ASI "won't do anything on its own unless we give it the ability to"? Like... Do you think they're not worrying about it at all because it's not a real thing to be nervous about?

1

AsheyDS t1_j6ujqvc wrote

I don't see why you're taking an extreme stance like that. Nobody said there wasn't any concern, but the general public only has things like Terminator to go by, so of course they'll assume the worst. Researchers have seen Terminator as well, and we don't outright dismiss it. But the bigger threat by far is potential human misuse. There are already potential solutions to alignment and control, but there are no solutions for misuse. Maybe from that perspective you can appreciate why I might want to steer people's perceptions of the risks. I think people should be discussing how we'll mitigate the impacts of misuse, and what those impacts may be. Going on about god-like Terminators with free will is just not useful.

3

TFenrir t1_j6wt23u wrote

>I don't see why you're taking an extreme stance like that. Nobody said there wasn't any concern

Well when you say things like this:

>You're making a lot of false assumptions. AGI or ASI won't do anything on its own unless we give it the ability to, because it will have no inherent desires outside of the ones it has been programmed with.

You are already dismissing one of the largest concerns many alignment researchers have. I appreciate that the movie version of an AI run amok is distasteful, and maybe not even the likeliest way that a powerful AI can be an existential threat, but it's just confusing how you can tell people that they are making a lot of assumptions about the future of AI, and then so readily say that a future unknown model will never have any agency, which is a huge concern that people are spending a lot of time trying to understand.

Demis Hassabis, for example, regularly talks about it. He thinks it would be a large concern if we made a model with agency, and thinks it is possible, but wants us to be really careful and avoid doing so. He's not the only one; there are many researchers who are worried about accidentally giving models agency.

Why are you so confident that we will never do so? How are you so confident?

1

AsheyDS t1_j6x9vez wrote

>Why are you so confident that we will never do so? How are you so confident?

I mean, you're right, I probably shouldn't be. I'm close to an AGI developer who has potential solutions to these issues and believes in being thorough, and certainly not giving it free will. So I have my biases, but I can't really account for others. The only thing that makes me confident is that the other researchers I've seen who (in my opinion) have the potential to make progress also seem altruistic, at least to some degree. I guess an 'evil genius' could develop it in private, and go through a whole clandestine supervillain arc, but I kind of doubt it. The risks have been beaten into everyone's heads. We might get some people experimenting with riskier aspects, hopefully in a safe setting, but I highly doubt anyone is going to just give it open-ended objectives and agency, and let it loose on the world. If they're smart enough to develop it, they should be smart enough to consider the risks. Demis Hassabis in your example says what he says because he understands those risks, and yet DeepMind is proceeding with their research.

Basically what I'm trying to convey is that while there are risks, I think they're not as bad as people are saying, even some other researchers. Everyone knows the risks, but some things simply aren't realistic.

1

just-a-dreamer- t1_j6uds0c wrote

That we don't know.

We don't know how it will be trained and by whom to what end. And there will be many AI models that get worked on. It is called the singularity for a reason.

An AI without what we call common sense might even be worse and give us paperclips in abundance.

1

AsheyDS t1_j6ugs8u wrote

The paperclip thing is a very tired example of a single-minded super-intelligence that is somehow also stupid. It's not meant to be a serious argument. But since your defense is to get all hand-wavey and say 'we just can't know' (despite how certain you seemed about your own statements in previous posts), I'll just say that a competently designed system being utilized by people without ill intentions will not spontaneously develop contrarian motivations and achieve 'god-like' abilities.

3

just-a-dreamer- t1_j6ui3pt wrote

God-like is relative. To some animals we must appear as gods. It is a matter of perspective.

Regardless, the way AI is trained and responds gets closer to how we teach our own small children.

In actuality we don't even know how human intelligence emerges in kids. We don't know what human intelligence is or how it forms as a matter of fact.

All we know is if you don't interact with babies, they die quickly even if they are well fed, for they need input to develop.

1

AsheyDS t1_j6ukhrl wrote

>In actuality we don't even know how human intelligence emerges in kids. We don't know what human intelligence is or how it forms as a matter of fact.

Again, you're making assumptions... We know a lot more than you think, and certainly have a lot of theories. You and others act like neurology, psychology, cognition, and so on are new fields of study that we've barely touched.

2

Surur t1_j6ue5ml wrote

I'm too tired to argue, so I am letting chatgpt do the talking.

An AGI (Artificial General Intelligence) may run amok if it has the following conditions:

  • Lack of alignment with human values: If the AGI has objectives or goals that are not aligned with human values, it may act in ways that are harmful to humans.

  • Unpredictable behavior: If the AGI is programmed to learn from its environment and make decisions on its own, it may behave in unexpected and harmful ways.

  • Lack of control: If there is no effective way for humans to control or intervene in the AGI's decision-making process, it may cause harm even if its objectives are aligned with human values.

  • Unforeseen consequences: Even if an AGI is well-designed, it may have unintended consequences that result in harm.

It is important to note that these are potential risks and may not necessarily occur in all cases. Developing safe and ethical AGI requires careful consideration and ongoing research and development.

1

AsheyDS t1_j6uiarq wrote

You're stating the obvious, so I don't know that there's anything to argue about (and I'm certainly not trying to). Obviously if 'X bad thing' happens or doesn't happen, we'll have a bad day. I have considered alignment and control in my post and stand by it. I think the problem you and others may have is that you're anthropomorphizing AGI when you should be considering it a sophisticated tool. Humanizing a computer doesn't mean it's not a computer anymore.

1

Surur t1_j6ul2uo wrote

The post says you don't have to anthropomorphize AGI for it to be extremely dangerous.

That danger may include trying to take over the world.

2

AsheyDS t1_j6uo5bb wrote

Why would a computer try to take over the world? The only two options are because it had an internally generated desire, or an externally input command. The former option is extremely unlikely. Could you try articulating your reasoning as to why you think it might do that?

0

Surur t1_j6uqj39 wrote

The most basic reason is that it would be an instrumental goal on the way to achieving its terminal goal.

That terminal goal may have been given to it by humans, leaving the AI to develop its own instrumental goals to achieve the terminal goal.

For any particular task, taking over the world is one potential instrumental goal.

For example, to make an omelette, taking over the world to secure an egg supply may be one potential instrumental goal.

For some terminal goals, taking over the world may be a very logical instrumental goal, e.g. maximising profit, ensuring health for the most people, getting rid of the competition, etc.

As the skill and power of an AI increases, the ability to take over the world becomes a more likely option, as it becomes easier and easier, and the cost lower and lower.
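That last point can be sketched as a toy cost-minimising planner (all goals, subgoal names, and costs here are invented for illustration): the drastic subgoal is never "chosen" out of malice, it simply wins the comparison once capability makes its effective cost low enough.

```python
# Toy illustration of instrumental goal selection by cost.
# Every goal, subgoal, and cost below is made up for the example.

# Candidate instrumental plans for a terminal goal, with assumed base costs.
PLANS = {
    "make_omelette": [
        ("check_fridge_for_eggs", 1),
        ("buy_eggs", 5),
        ("take_over_world_to_secure_egg_supply", 10_000),
    ],
}

def cheapest_plan(goal, capability=1.0):
    """Pick the subgoal with the lowest effective cost.

    `capability` models the agent's growing power: here we assume extreme
    plans benefit most from it, so their effective cost shrinks as it rises.
    """
    def effective_cost(plan):
        name, cost = plan
        return cost / capability if "take_over" in name else cost
    return min(PLANS[goal], key=effective_cost)[0]

print(cheapest_plan("make_omelette"))                     # check_fridge_for_eggs
print(cheapest_plan("make_omelette", capability=20_000))  # take_over_world_to_secure_egg_supply
```

Nothing in the planner changed between the two calls except raw capability, which is the point of the argument above.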

2

AsheyDS t1_j6uzur0 wrote

This is much like the paperclip scenario: it's unrealistic and incomplete. Do you really think a human-level AGI or an ASI would just accept one simple goal and operate independently from there? You think it wouldn't be smart enough to clarify things before proceeding, even if it did operate independently? Do you think it wouldn't consider the consequences of extreme actions? Would it not consider options that work within the system rather than against it? And you act like taking over the world is a practical goal that it would come up with, but is it practical to you? If it wants to make an omelette, the most likely options will come up first, like checking for eggs, and if there aren't any then going to buy some, because it will understand the world that it inhabits and will know to adhere to laws and rules. If it ignores them, then it will ignore goals as well, and just not do anything.

2

Surur t1_j6v0xyu wrote

As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law?

From our experience with AI systems, the shortest route to the result is what an AI optimises for, and if something is physically allowed it will be considered. Even if you think something is unlikely, it only has to happen once for it to be a problem.

Considering that humans have tried to take over the world, and they faced all the same issues around the need to follow rules, those rules are obviously not a real barrier.

In conclusion, even if you think something is very unlikely, this does not mean the risk is not real. If something happens once in a million times, it likely happens several times per day on our planet.
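A quick back-of-the-envelope check of that frequency claim (the population figure is a rough assumption, and "one trial per person per day" is just a stand-in for how often the situation arises):

```python
# Rough check: a one-in-a-million event, tried once per person per day.
population = 8_000_000_000  # assumed: roughly the world's population
one_in = 1_000_000          # "once in a million times"

expected_per_day = population // one_in
print(expected_per_day)  # 8000
```

So under those assumptions it is thousands of occurrences per day, not "several" - the point stands either way.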

1

AsheyDS t1_j6vejfr wrote

>As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law.

That's not what I said or meant. You're taking things to the extremes. It'll be neither a cold, logical, single-minded machine nor a human with human ambitions and desires. It'll be somewhere in between, and neither at the same time. In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.

But I get it, there's always going to be a risk of malfunction. Researchers are aware of this, and many people are working on safety. The risk should be quite minimal, but yes you can always argue there will be risks. I still think that the bigger risk in all of this is people, and their potential for misusing AGI.

1

Surur t1_j6w14rs wrote

> In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.

I believe it is much more likely we will produce a black box which is an AGI, that we then employ to do specific jobs, rather than being able to turn an AGI into a classic rule-based computer. It's likely the AGI we use to control our factory will know all about Abraham Lincoln, because it will have that background from learning to use language to communicate with us, and will know about public holidays and all the other things we take for granted with humans. It will be able to learn and change over time, which is the point of an AGI. There will be an element of unpredictability, just like with humans.

1

AsheyDS t1_j6xdpl6 wrote

>I believe it is much more likely we will produce a black box which is an AGI

Personally, I doubt that... but if current ML techniques do somehow produce AGI, then sure. I just highly doubt it will. I think that AGI will be more accessible, predictable, and able to be understood than current ML processes if it's built in a different way. But of course there are many unknowns, so nobody can say for sure how things will go.

1