Submitted by the_alex197 t3_118kd9l in singularity
Let's start with 2 assumptions:
- AGI is inevitable - I think most people here would agree
- We cannot control AGI - Now, perhaps we could control AGI, but the fact that we don't know for sure whether or not we even can, combined with the enormous potential negative consequences if we cannot, makes me think we need a plan B in case both of these assumptions turn out to be true.
What is the solution?
Well, what is the problem? The problem is that we have no hope of controlling AGI. But why? Because it would be many orders of magnitude more intelligent than us. So the solution? Simple. Increase our own intelligence so we remain more intelligent than the AI.
"But we could never hope to create a meat brain that could compete with an AGI!"
It always baffles me how conveniently narrow-minded people become when they discuss transhumanism. Obviously a digital format comes with many benefits. Perhaps we could digitize a human brain and then uplift it into superintelligence in a digital environment. Perhaps something else, I don't know, get creative! The point is, only superintelligent humans have any hope of controlling a superintelligent AI.
[deleted] t1_j9hu1go wrote
Because incomprehensibly smart humans are so much safer than AI.