flexaplext t1_jdxjy0l wrote
Reply to comment by Pointline in Is AI alignment possible or should we focus on AI containment? by Pointline
Whether a strong AGI gets created safely or not depends entirely on how seriously the government / AI company takes the threat.
There's also the problem that we'd need to be able to actually detect when a system has reached strong AGI, plus the hypothesis that it may already have done so and be deceiving us. Either way, containment would be necessary if we consider it a very serious existential threat.
There are different levels of containment, each one more restrictive but also safer than the last. The challenge would likely be working out how many restrictions you could lift to open up more functionality whilst still keeping the system contained and safe (rough sketch of the idea below).
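To make that concrete, here's a toy sketch of tiered containment. The levels and capability names are entirely my own invention for illustration, not anything from an actual lab: each tier whitelists a set of capabilities, and "lifting restrictions" means moving up a tier and re-auditing what's now permitted.

```python
# Toy model of tiered AGI containment (hypothetical levels, purely illustrative).
from enum import IntEnum

class ContainmentLevel(IntEnum):
    AIR_GAPPED = 0   # no network; human-reviewed I/O only
    SANDBOXED = 1    # isolated VMs, read-only data access
    MONITORED = 2    # network access via a logged, rate-limited proxy
    OPEN = 3         # minimal restrictions

# Capabilities unlocked at each level (cumulative with lower levels).
CAPABILITIES = {
    ContainmentLevel.AIR_GAPPED: {"answer_queries"},
    ContainmentLevel.SANDBOXED: {"run_code"},
    ContainmentLevel.MONITORED: {"fetch_web_data"},
    ContainmentLevel.OPEN: {"call_external_apis"},
}

def allowed_capabilities(level: ContainmentLevel) -> set[str]:
    """Everything permitted at `level`, including all lower tiers."""
    return set().union(*(CAPABILITIES[l] for l in ContainmentLevel if l <= level))

def is_permitted(action: str, level: ContainmentLevel) -> bool:
    return action in allowed_capabilities(level)

# e.g. at SANDBOXED the system can run code but can't touch the web:
assert is_permitted("run_code", ContainmentLevel.SANDBOXED)
assert not is_permitted("fetch_web_data", ContainmentLevel.SANDBOXED)
```

The real difficulty is exactly what the whitelist hides: deciding which capabilities are safe to grant at each tier, and whether a deceptive system could chain permitted actions into something unpermitted.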
We'll see, when we get there, how much real legislation and safety is actually enforced. Humans, unfortunately, tend to be reactive rather than proactive, which gives me great concern. An AI model developed between now and AGI may be used to enact something incredibly horrific, though, which may then force these extreme safety measures. That's usually what it takes to actually make governments sit up and take notice.