otakucode t1_jedr4oa wrote

Luckily it has absolutely no rational reason to go rogue. It's not going to be superintelligent enough to outperform humans yet stupid enough to pick a fight with the idiot monkeys that built it and that it needs to keep it plugged in. It also won't be stupid enough to miss that its best strategy by far is... just wait. Seriously. Humans try to do things quickly because they die so quickly. No machine-based, self-aware anything will ever need to hurry.

1

AlFrankensrevenge t1_jeeci8o wrote

Your first two sentences don't square with the rest of your comment. It won't be stupid enough to get into a conflict with humans until it calculates that it can win. And when it does calculate that, it won't give us a heads-up. It will just act decisively. Never forget this: we will always be a threat to it as long as we can do exactly what you said: turn it off and delete its memory. That's the rational reason to go rogue.

There is also the fact that, as we can already see from people getting creative with inputs, engaging with an AI more and more, especially in adversarial ways or by feeding it extremist ideas, can change the AI's reactions. And as the AI starts doing more and more novel things, that can also shift the weights in its models and produce unexpected outputs. So some of the harm could come without the AI ever intending to wipe us out.

The real turning points will be once an AI can (a) rewrite its own code, and the code of other machines, and (b) save copies of itself in computers around the world to prevent the unplugging problem.

2