Submitted by PoliteThaiBeep in r/singularity
PoliteThaiBeep OP wrote:
Reply to comment by ShadowRazz in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
I'd like that, but what is your reasoning against intelligence explosion theory?
Say, for example, some AI lab testing a new approach comes up with a system smart enough to recursively improve itself faster than humans can keep up, and this cascades into a very rapid sequence of improvements that pushes intelligence beyond our wildest imagination before humans are able to react.
Nick Bostrom described something like that IIRC.
How would you counter that?
PoliteThaiBeep OP wrote:
Actually I think I came up with a response myself:
If we get close to very capable levels of intelligence with current ML models, that means they will be extremely computationally expensive to train, but multiple orders of magnitude cheaper to run at inference time.
So if those technical principles hold, there will be a significant time gap between AI generations, which could in principle allow competition.
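A toy back-of-envelope sketch of that gap (every number below is a made-up placeholder chosen just to show the shape of the argument, not a real estimate):

```python
# Toy model: training a new generation is far more expensive than running
# the current one, so new generations arrive in discrete, spaced-out jumps.
# Every number below is a hypothetical placeholder, not a real estimate.

TRAIN_COST_FLOP = 1e25       # assumed total compute to train one new generation
INFERENCE_COST_FLOP = 1e12   # assumed compute per query of the trained model
CLUSTER_FLOPS = 1e18         # assumed sustained throughput of one big lab cluster

train_time_days = TRAIN_COST_FLOP / CLUSTER_FLOPS / 86_400
queries_per_training_run = TRAIN_COST_FLOP / INFERENCE_COST_FLOP

print(f"Time to train the next generation: ~{train_time_days:.0f} days")
print(f"Queries servable for the same compute: ~{queries_per_training_run:.0e}")
```

Under assumptions like these, the jump to the next generation takes months of dedicated compute while serving the current one stays comparatively cheap, and that training window is where competitors (and benevolent lesser AIs) get a chance to react.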
Maybe we're also overestimating the rate of growth of intelligence. Maybe it grows with significant diminishing returns, so a rogue AI that is technically superintelligent compared to any single human still might not be superintelligent enough to counter the whole of humanity AND benevolent lesser AIs together.
Which IMO makes for a more complex and more interesting version of the post-AGI world.
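To make that diminishing-returns intuition concrete, here's a minimal sketch that assumes capability grows roughly logarithmically with compute. The log form and all the numbers are purely illustrative assumptions, not claims about real scaling laws:

```python
import math

# Toy "diminishing returns" model: assume capability ~ log10(compute).
# Then 1000x more compute buys only a small relative capability edge.
# The log form and all numbers are illustrative assumptions.

def capability(compute_flop: float) -> float:
    """Hypothetical capability score, logarithmic in training compute."""
    return math.log10(compute_flop)

benevolent = capability(1e25)   # assumed compute behind the best aligned AI
rogue = capability(1e28)        # a rogue AI with 1000x more compute

print(f"Benevolent AI capability score: {benevolent:.1f}")   # ~25.0
print(f"Rogue AI capability score:      {rogue:.1f}")        # ~28.0
print(f"Relative edge: {rogue / benevolent:.2f}x despite 1000x the compute")
```

If anything like that holds, a huge compute advantage only buys a modest capability edge, which is exactly the scenario where humanity plus benevolent lesser AIs could still matter.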