
ShadowRazz t1_j4q9moo wrote

Well, we already see today that as soon as one type of AI becomes popular, a bunch of copycats or similar systems pop up. Like Midjourney and all the other image AIs.

We saw how Siri led to Alexa, Cortana, Bixby and Google Assistant. I don't see why an AGI would be any different.

5

PoliteThaiBeep OP t1_j4qwiat wrote

I'd like that, but what is your reasoning against the intelligence explosion theory?

Like, say some AI lab is testing a new approach and comes up with a system that is smart enough to recursively improve itself faster than humans can. This cascades into a very rapid sequence of improvements that pushes intelligence beyond our wildest imagination, faster than humans are able to react.

Nick Bostrom described something like that IIRC.

What do you counter it with?

1

PoliteThaiBeep OP t1_j4r2sjc wrote

Actually, I think I came up with a response myself:

If we get close to very capable levels of intelligence with current ML models, it means those models are extremely computationally expensive to train, but multiple orders of magnitude cheaper to run.

So if the principles of the technology remain similar, there will be a significant time frame between AI generations, which could in principle allow competition.
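To put very rough numbers on that (purely illustrative, assumed figures, not measurements of any real model or cluster), here's a quick back-of-envelope sketch:

```python
# Back-of-envelope sketch with hypothetical numbers: compare the one-time
# cost of training a frontier model to the per-query cost of running it,
# to see why new "generations" would be spaced out in time.

TRAIN_FLOPS = 3e23             # assumed total compute to train one frontier model
INFER_FLOPS_PER_QUERY = 1e12   # assumed compute for a single forward pass / query
CLUSTER_FLOPS_PER_SEC = 1e16   # assumed sustained throughput of a large training cluster

queries_per_training_run = TRAIN_FLOPS / INFER_FLOPS_PER_QUERY
training_time_days = TRAIN_FLOPS / CLUSTER_FLOPS_PER_SEC / 86400

print(f"One training run costs as much as ~{queries_per_training_run:.0e} queries")
print(f"The next generation needs ~{training_time_days:.0f} days on the same cluster")
```

Even with made-up numbers the gap is huge: running the model is cheap, but producing the *next* model is a months-long project on dedicated hardware, and that window is exactly where competitors (or other labs' AIs) could catch up.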

Maybe we also overestimate the rate of growth of intelligence. Maybe it'll grow with significant diminishing returns, so a rogue AI that is technically superintelligent compared to any given human might not be superintelligent enough to counter the whole of humanity AND benevolent lesser AIs combined.

Which IMO creates a more complex and more interesting version of the post-AGI world.

1