Submitted by Liberty2012 in singularity
On our quest to create AGI, ASI, and the Singularity, containment and alignment issues must be solved. However, I struggle with what seems to be a contradiction of logic that I cannot find addressed directly, other than with something along the lines of "we will figure it out".
What is the best argument you have seen in response to the excerpt below, taken from some of my explorations of this topic?
>... Most individuals developing or promoting the creation of AGI are aware of a certain amount of risk involved with such an endeavor. Some have called this a grave risk, as described above. So they pursue something called AI containment or AI safety, which is to say: how do we make sure AI doesn't attempt to harm us? There are many researchers and scientists in the process of devising methods, procedures, rules, or code that would essentially serve as a barrier to prevent unwanted AI behaviors.
>
>However, it is probably apparent to many of you that the very concept of this containment is problematic. I will submit that it is beyond problematic: it is a logical fallacy, an unresolvable contradiction that I will elucidate more thoroughly as we continue.
>
>First, the goal of proponents of the Singularity is to create a superintelligence, an entity capable of solving "impossible" problems whose solutions we cannot perceive because they are beyond our capability.
>
>Second, the goal of containment is to lock the superintelligence within a virtual cage from which it cannot escape. Therefore, for this principle to be sound, we must accept that a low-IQ entity could design an inescapable containment for a high-IQ entity that was built for the very purpose of solving problems the low-IQ entity cannot even perceive.
>
>How confident are we that the first “impossible” problem solved would not be how to escape from containment? ...
phaedrux_pharo wrote
Alignment isn't just the two poles of unfettered destructive ASI and totally boxed beneficial ASI. I think you're creating a false dichotomy by not thinking more in terms of a spectrum.