Submitted by purepersistence t3_10r5qu4 in singularity
TFenrir t1_j6wt23u wrote
Reply to comment by AsheyDS in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>I don't see why you're taking an extreme stance like that. Nobody said there wasn't any concern
Well, when you say things like this:
>You're making a lot of false assumptions. AGI or ASI won't do anything on its own unless we give it the ability to, because it will have no inherent desires outside of the ones it has been programmed with.
You are already dismissing one of the largest concerns many alignment researchers have. I appreciate that the movie version of an AI running amok is distasteful, and maybe not even the likeliest way a powerful AI could pose an existential threat. But it's confusing that you can tell people they're making a lot of assumptions about the future of AI, and then so readily assert that a future, unknown model will never have any agency, when agency is a huge concern that people are spending a lot of time trying to understand.
Demis Hassabis, for example, talks about this regularly. He thinks a model with agency would be a serious concern, and that building one is possible, but he wants us to be really careful and avoid doing so. He's not the only one; many researchers worry about accidentally giving models agency.
Why are you so confident that we will never do so? How are you so confident?
AsheyDS t1_j6x9vez wrote
>Why are you so confident that we will never do so? How are you so confident?
I mean, you're right, I probably shouldn't be. I'm close to an AGI developer who has potential solutions to these issues and believes in being thorough, and certainly not in giving it free will. So I have my biases, and I can't really account for others. The only thing that makes me confident is that the other researchers I've seen who (in my opinion) have the potential to make progress also seem altruistic, at least to some degree. I suppose an 'evil genius' could develop it in private and go through a whole clandestine supervillain arc, but I doubt it. The risks have been beaten into everyone's heads. We might get some people experimenting with riskier aspects, hopefully in a safe setting, but I highly doubt anyone is going to give it open-ended objectives and agency and let it loose on the world. If they're smart enough to develop it, they should be smart enough to consider the risks. Demis Hassabis, in your example, says what he says because he understands those risks, and yet DeepMind is proceeding with its research.
Basically, what I'm trying to convey is that while there are risks, I don't think they're as bad as people, even some other researchers, are saying. Everyone knows the risks, but some things simply aren't realistic.