Submitted by OmThepla t3_1172p6w in Futurology
Talik1978 t1_j9ainey wrote
Reply to comment by PO0tyTng in I BROKE THE ORIGINAL CHATGPT! (not the Bing one!) by OmThepla
>My question is — if it actually had the means do these things, what would it take to switch from hypothetical to reality? One rogue programmer getting rid of an IF statement or row of training data?
With self learning AI, it's entirely possible that the program learns to make that change itself.
AI is good at.doing what we ask it to, but that is not the same as doing what we want it to. As an example, programmers trained an AI to control a cleaning bot. It was trained on a reward model, where it received positive reinforcement whenever it couldn't detect a mess, and negative reinforcement whenever it could.
What did it learn to do? It covered its cameras with a cleaning bucket. Easy, efficient, and now it is constantly being rewarded, as it cannot detect any messes.
Viewing a single comment thread. View all comments