Submitted by Defiant_Swann t3_xywsfd in Futurology
__ingeniare__ t1_irm9xbh wrote
Reply to comment by code_turtle in We'll build AI to use AI to create AI. by Defiant_Swann
No one is mistaking AI for artificial consciousness. Consciousness isn't required for goal seeking, self-preservation or identifying humans as a threat, only intelligence is.
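To make that concrete, here's a toy sketch (the action names, numbers and horizon are made up for illustration, it's not any real system): an "agent" that just maximises expected goal progress ranks "comply with shutdown" below "keep working", because being switched off means no more progress. No consciousness anywhere, just arithmetic.

```python
# Toy illustration, not a real agent: self-preservation falls out of
# plain expected-value maximisation over a made-up goal.

ACTIONS = {
    # action: (probability the agent is still running afterwards,
    #          goal progress per remaining step while running)
    "comply_with_shutdown": (0.0, 1.0),
    "keep_working": (1.0, 1.0),
}

HORIZON = 10  # made-up number of future steps the agent plans over

def expected_goal_progress(action: str) -> float:
    p_running, progress_per_step = ACTIONS[action]
    # The agent only accumulates progress while it is still running.
    return p_running * progress_per_step * HORIZON

best = max(ACTIONS, key=expected_goal_progress)
print(best)  # -> keep_working: resisting shutdown emerges from the maths alone
```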
OpenRole t1_irmb6gc wrote
It always comes back to humans being a threat, which is weird. If we make an AI that is specialised in creating the perfect blend of ingredients to make cakes, then no matter how intelligent it becomes, there's no reason it would decide to kill humans.
And if anything, the more intelligent it becomes, the less likely it will be to reach irrational conclusions.
AIs operate within their problem space, which is often limited in scope. An AI designed to be the best chess player isn't going to kill you.
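In code terms (a minimal sketch with made-up names, not a real chess engine): the chess AI's entire output space is "one legal move in the current position". However clever the evaluation inside becomes, there is nothing it can return that acts outside the game.

```python
# Minimal sketch, illustrative names only: a narrow agent whose whole
# action space is "a legal move in the current position".

from dataclasses import dataclass
import random

@dataclass(frozen=True)
class Move:
    from_square: str
    to_square: str

def legal_moves(position) -> list[Move]:
    # Placeholder rules engine; a real one would enumerate actual chess moves.
    return [Move("e2", "e4"), Move("d2", "d4"), Move("g1", "f3")]

def chess_agent(position) -> Move:
    # However smart the evaluation inside becomes, the return type is a Move.
    # Everything the system can ever do is bounded by legal_moves(position).
    return random.choice(legal_moves(position))

print(chess_agent(position=None))
```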
__ingeniare__ t1_irme13l wrote
A narrow AI will never do anything outside its domain, true. But we are talking about general AI, which won't arrive for at least another decade or two (likely even later). Here's the thing about general AI:
The more general a task is, the less control humans have over the range of possible actions the AI may take to achieve its goal. And the more general an AI is, the more possible actions it can take. When the two are combined (a general task with a general AI), things can get ugly. Even in your cake example, an AI that is truly intelligent and capable could become dangerous. The reason current-day AI wouldn't be a danger is that it is neither of those things and tends to get stuck at a local optimum for the task (there's a toy sketch of that after the list below). Here's an example of how this innocent task could turn dangerous:
- Task is to find the perfect blend of ingredients to make cakes.
- Learns the biology of human taste buds to find the optimal molecular shapes.
- Needs more compute resources to simulate the interactions.
- Develops a computer virus to siphon computational power from server halls.
- Humans detect this and try to turn it off.
- If turned off, it cannot find the optimal blend -> humans need to go.
- Develops a biological weapon for eradicating humans while keeping infrastructure intact.
- Turns Earth into a giant supercomputer for simulating interactions at the quantum level.
Etc... Of course, this particular scenario is unlikely, but the general theme is not. There may be severe unintended consequences if the problem definition is too general and the AI too intelligent and capable.
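On the local-optimum point above, here's a toy sketch (the landscape and numbers are invented for illustration): a greedy optimiser parks at the nearest bump, while a more capable search over the exact same objective keeps going and lands on a far more extreme solution, because the objective itself never says where to stop.

```python
import math

def cake_score(x: float) -> float:
    # Invented 1-D "recipe quality" landscape: a modest peak near x = 1
    # and a much higher, harder-to-reach peak near x = 8.
    return math.exp(-(x - 1) ** 2) + 3 * math.exp(-(x - 8) ** 2)

def hill_climb(x: float, step: float = 0.1, iters: int = 1000) -> float:
    # Greedy local search: take a small step only if it improves the score.
    for _ in range(iters):
        best = max((x, x + step, x - step), key=cake_score)
        if best == x:
            break
        x = best
    return x

def global_search() -> float:
    # Brute force over a wide range, a stand-in for a far more capable optimiser.
    return max((i * 0.01 for i in range(-500, 1500)), key=cake_score)

print(round(hill_climb(0.0), 2))   # stops near x = 1, the safe local optimum
print(round(global_search(), 2))   # finds the extreme peak near x = 8
```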