Submitted by PoliteThaiBeep t3_10ed6ym in singularity
[removed]
I'd like that, but what is your reasoning against intelligence explosion theory?
Like, say someone comes up with a system smart enough to recursively improve itself faster than humans can. Say some AI lab was testing a new approach and came up with a new system that can improve itself, and this cascaded into a very rapid sequence that improved intelligence beyond our wildest imaginations, faster than humans were able to react.
Nick Bostrom described something like that IIRC.
What do you counter it with?
Actually I think I came up with a response myself:
If we're going to get close to very capable levels of intelligence with current ML models, that means the models are extremely computationally expensive to train, but multiple orders of magnitude cheaper to use.
So if these technological principles remain similar, there will be a significant time gap between AI generations - which could in principle allow competition.
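A rough back-of-the-envelope sketch of that train/use gap, using the common transformer FLOPs heuristics (~6·N·D FLOPs to train a model with N parameters on D tokens, ~2·N FLOPs per generated token). The parameter and token counts below are illustrative assumptions, not measurements:

```python
# Hedged, illustrative numbers: GPT-3-scale model as a stand-in.
N = 175e9  # parameters (assumption)
D = 300e9  # training tokens (assumption)

train_flops = 6 * N * D        # cost of one full training run
infer_flops_per_token = 2 * N  # cost of generating one token

# Tokens you could serve for the price of one training run:
tokens_per_training_budget = train_flops / infer_flops_per_token
print(f"{tokens_per_training_budget:.1e}")
```

Under these assumptions a single training run costs as much compute as serving roughly 10^12 tokens, which is the "orders of magnitude cheaper to use" point: generations are expensive and infrequent, while deployment of the current generation is cheap and continuous.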
Maybe we also overestimate the rate of growth of intelligence. Maybe it'll grow with significant diminishing returns, so a rogue AI that is technically superintelligent versus any given human might not be superintelligent enough to counter the whole of humanity AND benevolent lesser AIs together.
Which IMO creates a more complex and more interesting version of the world post AGI
I think the "arrival" of AGI will probably be a few AGIs made by competing groups, all around the same time. They'll probably run on supercomputers. I don't think these systems will be among the top 10 most powerful at the time, though, because those are usually running multiple government projects rather than having all their resources devoted to one project.
After that initial period, I'm sure there'll be as many AGIs as computing resources allow. So more and more will exist over time.
It'll be a combo.
There will be a dominant AI with far more processing power.
There will be a lot of weaker AIs. But they will not be able to surpass or supplant the dominant one, unless the dominant one is set up to allow a successor. So, we might have a sort of generational AI dominion of sorts.
It will dominate. If it leaves us alone, that will be by choice, not because it will lack the ability to dominate.
Why not use a lot of individual AGIs working together with each other and humans in place of one big AGI?
Why not distribute all money so that everyone has an equal amount at all times?
It's just not the nature of such things.
In a world with AI, the last thing we want is inequality. Inequality, competitiveness, and social Darwinism, while they were drivers of progress and prosperity in the past, are a guaranteed path to an unaligned super-AI.
I am saying that this is already the inevitable consequence. A lack of dominance of some sort is an intrinsically unstable equilibrium. Try as you might to make it equal, you will just create a system where whoever is most willing and able to dominate, dominates.
Not if others drag them down when they go too far.
But what if they cannot stop them because they went too far, and played a game of acting as normal as possible? I.e. a misaligned ASI might be fed data, information, and be trained, and have the ability to self improve for years before there is any sign of misalignment.
It'll look great... until it isn't. And due to the nature of intelligence, this is 0% predictable.
Still, if it's a human-comparable brain at that moment, its possibilities are much more limited than those of an omnimachine.
Also, AI deviations like that could be easier to diagnose than in a human or a bigger machine, because its memory is a limited amount of data, probably directly readable.
What you describe is #1 - singular AGI without any caveats.
Which means you probably subscribe to intelligence explosion theory; otherwise it's very difficult to imagine a singular entity dominating.
I vastly prefer the idea of AIs being individuals with high but capped intelligence, an ascetic worldview, aligned not just to humanity's goals as a whole, but to their friends and "family" specifically.
questionablecontent.net could be a good example of such society.
Very interesting results so far, because the dominant impression I get from this sub is that a single AGI will take over everything.
I personally think multiple companies or groups will develop different AGI through different methods, and they'll all be valid in their own ways. I don't think there's any one route to AGI, and even our own brains vary wildly from one another. It would actually be nice if we had such variety, so maybe a particular cognitive architecture could be paired with an individual to best help them, either because they operate similarly or very differently depending on their needs.
As for the form it will take, that's hard to tell. I think at first it may take a small supercomputer to develop it, but by the time it's ready for public use, computers will have changed a lot, and maybe we'll have similar specs in a much smaller package. If it's little more than software, it should be able to adapt, and hopefully we'll be able to install it on just about anything that can support it.
We will have intelligence amplification through narrow AIs before we can have AGI. At a certain point, we will require neuromorphic hardware and spiking neural networks. But that will not give us AGI. We need quantum supremacy: millions of coherent qubits in a quantum computer. That alone would have a price tag in the tens of millions of dollars, inflation adjusted, or more. So if the trend continues of the rich getting richer and the poor getting poorer, the number of people and companies who can afford to build AGI would be quite low. Understandably, not all of them will have an interest in AGI. So multiple, maybe, but certainly not billions, dependent on cracking quantum supremacy and other hurdles.
>So if the trend continues of the rich getting richer and the poor getting poorer,
That's very US-centric. Worldwide, extreme poverty has fallen dramatically, and raising people out of poverty is still an ongoing process.
In the US, yeah... since 1972, productivity has gone way up, yet wages have stagnated.
Imma have to fight Ultron, eventually. Dang, that's going to be a busy week.
When the Gods have their battle, only one will be left standing…. Or maybe not lol
ShadowRazz t1_j4q9moo wrote
Well, we already see today that as soon as one type of AI becomes popular, a bunch of copycats or similar systems pop up. Like Midjourney and all the other image AIs.
We saw how Siri led to Alexa, Cortana, Bixby, and Google Assistant. I don't see why an AGI would be any different.