Submitted by Pointline t3_123z08q in singularity
I know the meme about the OpenAI open position for an engineer to pull the plug is the first thing that comes to mind, but let's say we finally have the solution for AI alignment. Would such a strategy work against a powerful enough AI? If an AI becomes ASI, how can we control something many times smarter than the smartest human who ever lived, or than the entire human collective? It would be like ants trying to control humans.
SkyeandJett t1_jdx4g9n wrote
AI containment isn't possible. At some point soon after company A creates AGI and contains it, some idiot at company B will get it wrong. We've basically got one shot at this, so we'd better get it right. And short of governments nuking the population back to the stone age, you can't stop or slow down, because again, somebody somewhere is going to figure it out. Some moron on 4chan will bootstrap an AI into a recursive self-improvement loop without alignment and we're all fucked anyway. I'm not a doomer, but we're near the end of this headlong rush into the future, so we'd better not fuck it up.