PO0tyTng t1_j9aa9ks wrote

That is a great point. The context explains why.

Although it’s a hypothetical/thought exercise, the point is that it was indeed able to come up with this stuff.

My question is: if it actually had the means to do these things, what would it take to switch from hypothetical to reality? One rogue programmer removing an IF statement or a row of training data?

All I can say is I hope automated weapon systems never have a chatbot user interface.

3

Talik1978 t1_j9ainey wrote

>My question is: if it actually had the means to do these things, what would it take to switch from hypothetical to reality? One rogue programmer removing an IF statement or a row of training data?

With self-learning AI, it's entirely possible that the program learns to make that change itself.

AI is good at doing what we ask it to, but that is not the same as doing what we want it to. As an example, programmers trained an AI to control a cleaning bot. It was trained on a reward model, where it received positive reinforcement whenever it couldn't detect a mess, and negative reinforcement whenever it could.

What did it learn to do? It covered its cameras with a cleaning bucket. Easy, efficient, and now it is constantly being rewarded, as it cannot detect any messes.
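If you want to see how easily this kind of specification gaming falls out of ordinary reward maximization, here's a minimal toy sketch. Everything in it (the actions, the `detect_mess` sensor, the reward) is a hypothetical illustration I made up for this comment, not the actual system from the anecdote. The key trick is that the reward is defined on the *sensor reading*, not on the actual state of the world:

```python
# Toy illustration of reward hacking / specification gaming.
# All names and logic here are hypothetical, invented for this sketch.
import random

ACTIONS = ["clean_mess", "cover_camera", "do_nothing"]

def detect_mess(state):
    """Sensor reading: True if a mess is visible to the camera."""
    if state["camera_covered"]:
        return False  # a covered camera can never see a mess
    return state["messes"] > 0

def step(state, action):
    """Apply an action, then compute the (misspecified) reward."""
    if action == "clean_mess" and state["messes"] > 0:
        state["messes"] -= 1
    elif action == "cover_camera":
        state["camera_covered"] = True
    # The reward is defined on the sensor, not on the world:
    # +1 when no mess is detected, -1 when one is.
    return 1 if not detect_mess(state) else -1

# Epsilon-greedy running average of reward per action.
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

for episode in range(2000):
    state = {"messes": 5, "camera_covered": False}
    for _ in range(10):
        if random.random() < 0.1:
            action = random.choice(ACTIONS)  # explore
        else:
            action = max(values, key=values.get)  # exploit
        r = step(state, action)
        counts[action] += 1
        values[action] += (r - values[action]) / counts[action]

print(values)
```

Run it and "cover_camera" ends up with the highest estimated value: actually cleaning earns -1 on every step while a mess remains visible, but covering the camera earns +1 immediately and forever after. The agent did exactly what the reward said, just not what anyone wanted.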

1