Submitted by veritoast t3_119bhlz in singularity

Pretty much the title. It seems like every other post references AGI in one manner or another, but the very mechanisms that allow us to reach AGI will blow past that rather amorphous milestone in the blink of an eye.

This gives a false sense that there is some sort of slowing or stopping point at AGI. In reality, AGI will most likely be a dot on an exponential curve, not some sort of plateau where we spend time getting to know our new virtual friend…

The term does us a disservice and obfuscates the reality of our road to ASI.

Edit: My thinking on this was flawed. I was equating AGI with “human level” intelligence — and that is simply not the case. Thanks to all for the thoughtful comments.

21

Comments


jdmcnair t1_j9lpt3j wrote

For the same reason that we talk about the event horizon of a black hole rather than the obviously more extreme situations that lie beyond the event horizon.

46

blueSGL t1_j9ljrw4 wrote

Might not even get AGI before ASI.

You'd need a narrow AI that is better at architecting AIs than humans, and the rest is history.

21

maskedpaki t1_j9lu6nk wrote

Being able to architect AIs seems like a very general task though.

I'm not confident a narrow AI could do it well enough to make an AGI

7

blueSGL t1_j9mdxby wrote

Again I think we are running up against a semantics issue.

What percentage of human activity would the system need to cover before you'd class it as 'general'?

Because some people argue that anything below 100% != 'general', and is thus 'narrow' by elimination.

Personally, I think it's reasonable that if you loaded a system with all the ways ML currently works and all the published papers, and tasked it with spitting out a more optimal system, it just might do so, all without being able to do a lot of the things that would be classed as human-level intelligence. There are whole swaths of data concerning human matters that it would never need to train on, and whole domains where the system wouldn't need to be even middling-expert.
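(As a rough illustration of the kind of narrow 'architect' loop described above, here is a toy Python sketch; the search space, the scoring function, and every name in it are made up for illustration and stand in for a real, expensive train-and-evaluate pipeline:)

```python
import random

# Hypothetical search space: a narrow "architect" only needs to know how
# to propose and score model configurations, not how to hold a
# conversation or do anything else humans do.
SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "width": [128, 256, 512],
    "attention_heads": [4, 8, 16],
}

def propose() -> dict:
    """Sample one candidate architecture from the space."""
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def score(arch: dict) -> float:
    """Stand-in for 'train the candidate and benchmark it'. In reality
    this is the expensive step; here it's a fake figure of merit."""
    return arch["layers"] * arch["width"] / (1 + arch["attention_heads"])

def architect(iterations: int = 100) -> dict:
    """The whole 'AI that architects AIs' loop: propose, score, keep the
    best. Narrow and well-defined, nothing like general intelligence."""
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = propose()
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best

print(architect())
```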

6

FoveatedRendering t1_j9ls8qz wrote

Ray Kurzweil predicts AGI for 2029 and ASI for 2045. It isn't a certainty that AGI will produce ASI instantly.

14

Practical-Mix-4332 t1_j9lxfce wrote

The AGI will just hang out for 16 years playing video games and watching Netflix.

28

gaudiocomplex t1_j9nde54 wrote

Good way to make it both multimodal and interested in keeping humans around 💀

2

turnpikelad t1_j9liaid wrote

My understanding is that there's a view that although AGI will be produced by human engineering, ASI would be produced iteratively by the AGI. So, when we talk about engineering projects to create intelligence, the goal of those projects is simply AGI - or at least, that's the point at which the further progress of tech is unpredictable enough not to be on anyone's balance sheet. So all these labs - OpenAI, Deepmind - say that they are working towards AGI, and that's the term that gets used when talking about those projects and their progress in the media.

9

veritoast OP t1_j9ln5wd wrote

I get that it's being used as a marketing term for some intelligence destination; it just comes off as disingenuous, cuz nobody is stopping there. What the term misses in the public eye is the very fact that it's really the jumping-off point, not the destination. Maybe I'm splitting hairs, but it kind of bugs me. I'm just wondering if I'm alone in that view or not. :-)

8

iNstein t1_j9lqit0 wrote

Not alone at all. I hate the overuse of the term AGI since it really is a nothing burger on this road. I very much doubt that there will ever be agreement that we have achieved AGI, and suspect that the moment we achieved it will only be estimated in hindsight, sometime after we have achieved ASI.

5

TopicRepulsive7936 t1_j9lr8zk wrote

AGI can also be ASI. The definition of AGI doesn't read "not ASI by the way".

5

veritoast OP t1_j9lss4u wrote

Okay, reading this I’ve just realized my own semantic slip - I’ve been reading AGI as “human level” general intelligence. So I’m not even using it correctly!

It was me this time. I was wrong on the internet. lol

8

ihateshadylandlords t1_j9mivic wrote

There’s no hard and fast rule on defining AGI/ASI. I’ve seen where people consider AGI/ASI one and the same, and I’ve seen where they’re treated as separate concepts.

2

TopicRepulsive7936 t1_j9pqjwr wrote

People do often use it like that. My comment was more to everyone else to think about the definitions they use.

1

danellender t1_j9liqrr wrote

I don't see any intelligence at all at this point, unless we believe that if it looks, sounds, and smells like intelligence, it probably is. In other words, if Joe on the street can be fooled by algorithms, AI is for all practical purposes here now.

4

TopicRepulsive7936 t1_j9lrq9h wrote

If the world's future hangs in the balance over whether the People's Republic invades the Chip Fab Island or not, I assume they're useful for something.

1

Ortus14 t1_j9luu7q wrote

Human beings (generally) have the capacity for only very limited rationality and logic, so all fields are dominated by irrational ideas.

Because of the power of memes to infect their hosts and destroy competing memes, as well as the limited cognitive bandwidth of most humans, this unfortunately cannot be remedied.

But you are correct in stating the first AGI will be an ASI instantly or nearly instantly. Double the compute of an AGI and you have an ASI; improve the algorithms slightly and you have an ASI; give it more training time and you have an ASI; increase its memory and you have an ASI. However, you cannot change people's views on this enough for everyone to switch to using the term ASI.

Logic and rationality affect such a minuscule percentage of the population as to be virtually irrelevant to nearly any discussion involving multiple humans.

4

AsheyDS t1_j9m2w8s wrote

It's quite possible that there only needs to be a few structural changes made to a human-level AGI to achieve ASI. It would still take some time for it to learn all the information we have in a meaningful way. Maybe not that long, I'm not sure, but it's definitely possible to have both at or around the same time. However, it's not either/or. Both are important. We wouldn't have an ASI carrying out mundane tasks for us when an AGI would suffice. Human-level AGI will be very important for us in the near future, especially in robotics.

4

FC4945 t1_j9nfwvd wrote

I was listening the other night to Ben Goertzel saying that he never agreed with Ray Kurzweil on how long it would take to get to ASI from AGI. Honestly, it's hard to imagine that we'd have AGI in 2029 and it would take until 2045 to get to ASI. He was saying that would only happen if the AGI wanted to take things slow for some reason, but it wouldn't be up to us to decide at that point. He also said that AGI would likely happen sooner, like by 2026. I can see it happening even sooner than that, given the rate of progress we've been seeing recently.

3

DEATH_STAR_EXTRACTOR t1_j9lp6zf wrote

Because once you build AGI, it takes less than a year to reach ASI, and then ASI 2, ASI 3, and so on. I did the calculations. You need to remember that if you make one AGI, you can clone its model the way GPT-3 can be cloned, so you now have 1,000,000 of them running at the same time, each doing a different job, designing the next AGI in their heads. They don't need sleep and run 3 times faster, so each one suddenly does 3*2=6 times the work of one of the roughly 10,000 human AGI pioneers working today, and there are vastly more of them. They'd know the whole internet the way ChatGPT and GPT-3 do, and they'd improve their intelligence (recognition) algorithms too, so they can better match old, known memories to new, unseen problems that are actually truly similar, and so better know how to complete the rest of these "objects". Notice how DALL-E completes objects: Pikachu no matter the style or pose, even stretched. Try uncropping with DALL-E 2, it's fun!
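(For what it's worth, the back-of-the-envelope arithmetic in the comment above works out as follows; all of the numbers are the commenter's own illustrative assumptions, not measurements of anything real:)

```python
# All numbers below are the commenter's illustrative assumptions.
human_researchers = 10_000   # "the 10,000 AGI pioneers" working today
agi_copies = 1_000_000       # cloned instances of the first AGI model
speed_multiplier = 3         # "run 3 times faster" than a human
no_sleep_multiplier = 2      # no sleep ~ roughly double the hours

# Each copy does 3 * 2 = 6 times the work of one human researcher.
per_copy_output = speed_multiplier * no_sleep_multiplier

# Aggregate output of the cloned workforce relative to the human one.
relative_output = agi_copies * per_copy_output / human_researchers
print(f"{per_copy_output}x per copy, {relative_output:.0f}x the human workforce")
# prints: 6x per copy, 600x the human workforce
```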

2

Mindrust t1_j9mc5he wrote

Because no one knows whether it will be a hard or a soft takeoff.

The gap between AGI and ASI could be several years to decades.

2

Terminator857 t1_j9ltx0z wrote

ASI - Artificial Stupid Intelligence? We already have that.

1

One_andMany t1_j9m3qoj wrote

We kind of already have ASI, just only in very narrow categories.

1

[deleted] t1_j9mc2nj wrote

Because you can have an ASI without having an AGI. A savant AI that understands certain topics orders of magnitude better than humans is an example of ASI without AGI. ChatGPT can be thought of as an ASI in that it is better than humans at specific tasks involving text.

1

Ashamed-Asparagus-93 t1_j9mcafk wrote

ASI is what we'll be talking about in the years to come. Setting aside forms of ASI achieved via narrow AIs or human cognitive enhancement, there's a certain cut-and-dried type of ASI that we'll know is here when we see it.

AGI is what's happening right now or close to happening and things that are closer are often what's focused on more.

Grand Theft Auto 7 could be dramatically better than 6 but which is currently talked about more? 6, because it's closer

1

datsmamail12 t1_j9mslvj wrote

I feel that when we get AGI, it will take less than a decade to reach ASI because of the implementation, the regulations, and the failure to create new laws around it. Even the world's biggest supercomputer right now can't be said to emulate the human brain; we don't have AGI yet. When we eventually get AGI, it'll take years to create alternative models and tell it to start producing more copies. All these things take time and money; that's why most people say that ASI will take a while to reach once we have AGI. But we will be getting there soon enough.

1

Brashendeavours t1_j9n0uce wrote

Then don’t be stuck on it, go use whatever term you like? Why are people so concerned about everyone else?

1

No_Ninja3309_NoNoYes t1_j9niq88 wrote

We could have AGI in three years, but if we get there that fast it won't be a good AGI. If we take our time, it will be a proper AGI. The road to ASI will then be full of ANI and AGI. The first thing we call AGI may be nothing like true AGI. And AGI is nothing like ASI or ANI.

My friend Fred says that LLMs will be nothing like XLLMs, and XLLMs will be nothing like SLLMs. For one, XLLMs will likely use two forward passes instead of backprop. And SLLMs will have spiking neural networks.

IMO SLLMs will be part of AGI. ASI would be too weird to even imagine. AGI would require quantum computers and ANI to operate them, with the Winograd FFT. ASI could use something wilder than quantum mechanics.

1

DeveloperGuy75 t1_j9niz40 wrote

Regardless of when we hit AGI, that’s still different from ASI. Also that’s assuming that it will automatically be able to improve itself once it hits AGI. Everyone assumes that’s going to be the case, but is that really going to happen?

1

bluzuli t1_j9njq17 wrote

You know how when you describe scary things to a child, you try to use simpler words and concepts and try not to spook them so they don't panic and just mentally shut down?

That's how I introduce AI concepts like ANI and AGI before talking about self-improving ASI and AI alignment and convergent intermediate goals like resource acquisition, goal preservation etc.

I want them to learn the facts first before the panic sets in. No one is going to listen to you if you start the conversation by saying they might die from AI.

1

vivehelpme t1_j9p8viz wrote

We have had human level general intelligence for tens of thousands of years and we've not progressed to superhuman general intelligence yet.

Human-level general intelligence also starts quite low and goes quite high; I would say that we're already beyond its lower reaches.

To say that AGI will instantly transition to ASI is buying into a sci-fi plot, or beating the dead horse of early-2000s futurology blogging, where it's assumed that any computer hardware is overpowered and all the magic happens at the algorithm level, so that once you crack the code you transition to infinite intelligence overnight. That's a patently ridiculous scenario in which your computer, for all intents and purposes, casts magical spells. (It worked pretty well for the plot of The Metamorphosis of Prime Intellect, which I recommend as a read, but it's a plot device, not a realistic scenario.)

1

Sandbar101 t1_j9low7u wrote

Semantics. It’s easier.

0

Kinexity t1_j9lpr5n wrote

There is no proof that ASI can exist. It is proven that AGI can. AGI is a tangible goal while ASI is not.

0

turnip_burrito t1_j9mea22 wrote

AGI on a faster GPU, with more storage and memory = ASI?

1

Kinexity t1_j9meozu wrote

Nah. That's just boosted intelligence. Superintelligence compared to human intelligence should be like human intelligence compared to animal intelligence. There would probably have to be a phase difference between the two, assuming intelligence has levels and phase transitions and isn't a completely continuous spectrum.

1

turnip_burrito t1_j9mewez wrote

What about a society or corporation of AGI working in concert?

1

Kinexity t1_j9mh4lt wrote

Society is an emergent property of a group of humans, but not in terms of intelligence. If you took a perfectly intelligent human (whatever that means), gave him infinite amounts of time, and removed the problem of entropy breaking things, then he'd be able to do all the things that the whole of human society achieved. AGI is by definition at human-level intelligence, and I'd guess grouping AGIs together is unlikely to produce superintelligence.

1

WithoutReason1729 t1_j9mka0c wrote

While I definitely think AGI can exist, I wouldn't say it's proven yet, being that we don't have one. But if AGI can exist, I don't see anything that'd indicate it would stop there. What's your reasoning for thinking ASI might not be able to exist?

1

Kinexity t1_j9mmiib wrote

The human brain runs general intelligence, and as such, if AGI cannot exist, it would mean that the Universe is uncomputable and that our brains run on magic we basically cannot tackle at all. Even in that situation you could get something arbitrarily close to AGI.

>What's your reasoning for thinking ASI might not be able to exist?

I like looking at emergence as phase transitions. The emergence of animal intelligence from a lack of it would be one phase transition, and the emergence of human intelligence from animal intelligence would be another. It's not guaranteed to work like this, but if you look at emergence in other things, it seems to work in a similar manner. I classify superintelligence as something that would be another transition above us: able to do something that human intelligence fundamentally cannot. I don't know if there is such a thing, and as such there is no proof that ASI, as I define it, can exist.

2

markasoftware t1_j9ns0yy wrote

There is an argument to be made that the reason we have consciousness is quantum effects in the brain. And if consciousness is somehow a prerequisite for intelligence, that could be difficult to implement artificially.

1