Comments


ShadowRazz t1_j4q9moo wrote

Well, we already see today that as soon as one type of AI becomes popular, a bunch of copycats and similar systems pop up. Look at Midjourney and all the other image AIs.

We saw how Siri led to Alexa, Cortana, Bixby, and Google Assistant. I don't see why AGI would be any different.

5

PoliteThaiBeep OP t1_j4qwiat wrote

I'd like that, but what is your reasoning against intelligence explosion theory?

Say someone comes up with a system smart enough to recursively improve itself faster than humans can. Some AI lab tests a new approach, the resulting system starts improving itself, and this cascades into a very rapid sequence that pushes intelligence beyond our wildest imaginations faster than humans can react.
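A minimal sketch of the dynamic I mean, as a toy recurrence (every constant here is made up; only the compounding shape matters):

```python
# Toy model of recursive self-improvement: each cycle, the system converts
# some fraction of its current capability into further capability gains.
# All constants are illustrative, not estimates.

capability = 1.0        # human-researcher baseline, arbitrary units
gain_per_cycle = 0.5    # fraction of current capability added each cycle

for cycle in range(1, 21):
    capability *= 1 + gain_per_cycle   # gain proportional to capability
    print(f"cycle {cycle:2d}: capability = {capability:9.1f}x baseline")

# Because each gain is proportional to current capability, growth is
# exponential: 1.5**20 is roughly 3300x after just 20 cycles. If cycles
# also get *shorter* as capability rises, you get the fast-takeoff story.
```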

Nick Bostrom described something like that IIRC.

What do you counter it with?

1

PoliteThaiBeep OP t1_j4r2sjc wrote

Actually I think I came up with a response myself:

If we get to very capable levels of intelligence with current ML models, it means those models are extremely computationally expensive to train but multiple orders of magnitude cheaper to run.

So if the technology's principles remain similar, there will be a significant time gap between AI generations - which could in principle allow competition.
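Rough numbers behind that intuition, using the common ~6·N·D training / ~2·N per-token inference FLOPs approximations for dense transformers (the model and dataset sizes below are hypothetical, chosen only to show the ratio):

```python
# Back-of-envelope for the train/run cost gap, using the standard
# dense-transformer approximations (ballpark, not a forecast):
#   training FLOPs  ~ 6 * params * training_tokens
#   inference FLOPs ~ 2 * params per generated token
# The parameter and token counts below are hypothetical.

params = 1e12          # hypothetical 1-trillion-parameter model
train_tokens = 2e13    # hypothetical 20-trillion-token training run

train_flops = 6 * params * train_tokens   # one-off cost per generation
infer_flops_per_token = 2 * params        # recurring cost per token

print(f"training run:  {train_flops:.1e} FLOPs")
print(f"per token:     {infer_flops_per_token:.1e} FLOPs")
print(f"tokens served for one training run's budget: "
      f"{train_flops / infer_flops_per_token:.1e}")
```

Under these toy numbers, one training run costs as much compute as serving tens of trillions of tokens, which is the gap that would separate AI generations and leave room for competitors.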

Maybe we also overestimate the rate of growth of intelligence. Maybe it will grow with significant diminishing returns, so a rogue AI that is technically superintelligent compared to any given human might not be superintelligent enough to counter all of humanity AND benevolent lesser AIs combined.
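For contrast with the explosive case above, the same toy recurrence with diminishing returns baked in (again, invented constants):

```python
import math

# Toy contrast: each cycle's gain grows only logarithmically with current
# capability (diminishing returns). Constants are illustrative only.

capability = 1.0
for cycle in range(1, 21):
    capability += math.log1p(capability)   # gain shrinks relative to size
    print(f"cycle {cycle:2d}: capability = {capability:6.1f}x baseline")

# Growth here stays in the tens-of-times range after 20 cycles, not
# thousands: superintelligent versus any one human, but plausibly not
# enough to beat humanity plus aligned lesser AIs combined.
```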

That, IMO, makes for a more complex and more interesting post-AGI world.

1

phriot t1_j4qkxa9 wrote

I think the "arrival" of AGI will probably be a few AGIs made by competing groups at around the same time. They'll probably run on supercomputers. I don't think these will be among the top 10 most powerful machines at the time, though, because those are usually running multiple government projects rather than devoting all their resources to one project.

After that initial period, I'm sure there will be as many AGIs as computing resources allow, so more and more will exist over time.

5

OldWorldRevival t1_j4qh8to wrote

It'll be a combo.

There will be a dominant AI with far more processing power.

There will be a lot of weaker AIs, but they won't be able to surpass or supplant the dominant one unless it is set up to allow a successor. So we might end up with a kind of generational AI dominion.

It will dominate. If it leaves us alone, that will be by choice, not because it will lack the ability to dominate.

3

Baturinsky t1_j4qm5bv wrote

Why not use a lot of individual AGIs working with each other and with humans, instead of one big AGI?

1

OldWorldRevival t1_j4qows7 wrote

Why not distribute all money so that everyone has an equal amount at all times?

It's just not the nature of such things.

1

Baturinsky t1_j4qryvy wrote

In a world with AI, the last thing we want is inequality. Inequality, competitiveness, and social Darwinism, while drivers of progress and prosperity in the past, are a guaranteed path to an unaligned super-AI.

1

OldWorldRevival t1_j4qxfm1 wrote

I am saying that this is already the inevitable consequence. The absence of dominance of some sort is intrinsically an unstable equilibrium. Try as you might to make it equal, you will just create a system where whoever is most willing and able to dominate does so.

1

Baturinsky t1_j4qzpye wrote

Not if the others drag them down when they go too far.

1

OldWorldRevival t1_j4r1myr wrote

But what if they can't stop it because it has already gone too far while playing a game of acting as normal as possible? I.e., a misaligned ASI might be fed data, be trained, and have the ability to self-improve for years before there is any sign of misalignment.

It'll look great... until it doesn't. And due to the nature of intelligence, this is 0% predictable.

1

Baturinsky t1_j4r6gji wrote

Still, if it is a human-comparable brain at that point, its possibilities are much more limited than those of an omnimachine.

Also, deviations like that could be easier to diagnose in an AI than in a human or a bigger machine, because its memory is a limited amount of data, probably directly readable.

1

PoliteThaiBeep OP t1_j4qu7m5 wrote

What you describe is #1 - singular AGI without any caveats.

Which means you probably subscribe to the intelligence explosion theory - otherwise it's very difficult to imagine a singular entity dominating.

1

Baturinsky t1_j4qlrur wrote

I vastly prefer the idea of AIs being individuals with high but capped intelligence and an ascetic worldview, aligned not just to humanity's goals as a whole but to their friends and "family" specifically too.
questionablecontent.net could be a good example of such a society.

2

AsheyDS t1_j4qw68h wrote

Very interesting results so far, because the dominant impression I get from this sub is that a single AGI will take over everything.

I personally think multiple companies or groups will develop different AGIs through different methods, and they'll all be valid in their own ways. I don't think there's any one route to AGI; even our own brains vary wildly from one another. It would actually be nice to have that kind of variety: a particular cognitive architecture could be paired with an individual to best help them, whether because it operates similarly to them or very differently, depending on their needs.

As for the form it will take, that's hard to tell. I think at first it may take a small supercomputer to develop it, but by the time it's ready for public use, computers will have changed a lot, and maybe we'll have similar specs in a much smaller package. If it's little more than software, it should be able to adapt, and hopefully we'll be able to install it on just about anything that can support it.

2

No_Ninja3309_NoNoYes t1_j4rfnhr wrote

We will have intelligence amplification through narrow AIs before we can have AGI. At a certain point, we will require neuromorphic hardware and spiking neural networks, but even that will not give us AGI. We need quantum supremacy: millions of coherent qubits in a quantum computer. That alone would carry a price tag in the tens of millions of dollars, inflation adjusted, or more. So if the trend of the rich getting richer and the poor getting poorer continues, the number of people and companies that can afford to build AGI will be quite low, and understandably, not all of them will be interested in AGI. So: multiple, maybe, and certainly not billions, contingent on cracking quantum supremacy and other hurdles.

2

PoliteThaiBeep OP t1_j4smayv wrote

>So if the trend of the rich getting richer and the poor getting poorer continues,

That's very US-centric. Worldwide, extreme poverty has been dramatically reduced, and lifting people out of poverty is still an ongoing process.

In the US, yeah: since 1972 productivity has gone way up, yet wages have stagnated.

1

[deleted] t1_j4rnx2y wrote

Imma have to fight Ultron, eventually. Dang, that's going to be a busy week.

2

Professional-Song216 t1_j4t1edd wrote

When the Gods have their battle, only one will be left standing…. Or maybe not lol

1