Submitted by PoliteThaiBeep in r/singularity

PoliteThaiBeep (OP) wrote, replying to their own comment in "Singular AGI? Multiple AGI's? billions AGI's?":
Actually I think I came up with a response myself:
If we get close to very capable levels of intelligence with current ML approaches, the resulting models will be extremely computationally expensive to train, but multiple orders of magnitude cheaper to run.
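To put rough numbers on that gap, here's a back-of-envelope sketch using the common approximations of ~6·N·D FLOPs for training and ~2·N FLOPs per generated token. The GPT-3-scale parameter and token counts are purely illustrative, not figures from the thread:

```python
# Rough sketch of why training costs so much more than inference.
# Uses the standard rules of thumb: train_flops ~ 6*N*D,
# inference_flops ~ 2*N per token. Numbers are illustrative only.

N = 175e9   # model parameters (GPT-3 scale, for illustration)
D = 300e9   # training tokens (illustrative)

train_flops = 6 * N * D        # ~3.15e23 FLOPs for one training run
per_token_inference = 2 * N    # ~3.5e11 FLOPs per generated token

# Tokens of inference you get for the cost of one training run:
ratio = train_flops / per_token_inference
print(f"~{ratio:.0e} tokens of inference per training run")  # ~9e+11
```

Under those assumptions, one training run buys you on the order of a trillion tokens of inference, which is the "orders of magnitude cheaper to use" point.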
So if the principles of this technology stay similar, there will be a significant time gap between AI generations, which could in principle allow competition between multiple AGIs.
Maybe we also overestimate how fast intelligence will grow; maybe it grows with significant diminishing returns (a toy sketch of that idea is below). In that case a rogue AI that is technically superintelligent relative to any given human might not be superintelligent enough to counter the whole of humanity AND benevolent lesser AIs together.
Which IMO creates a more complex and more interesting version of the post-AGI world.
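As a minimal sketch of what "diminishing returns" could look like, assume capability tracks a compute-scaling power law (as reported in e.g. Kaplan et al. 2020, where loss falls as compute to a small negative exponent). The exponent here is illustrative, not a measured value for any real model:

```python
# Toy illustration of diminishing returns: if loss ~ C**(-alpha)
# with a small exponent alpha, each 10x of compute buys a
# shrinking improvement. alpha is an assumed, illustrative value.

alpha = 0.05
for factor in [1, 10, 100, 1000]:
    # relative loss vs. baseline after multiplying compute by `factor`
    print(factor, round(factor ** -alpha, 3))
# 1000x more compute only cuts loss to ~71% of baseline here
```

If intelligence scales anything like this, a rogue AI with 10x the compute of its rivals may not be 10x more capable, which is what leaves room for humanity plus lesser AIs to push back.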