Accomplished_Diver86 t1_izuefw6 wrote
Disagree. I know what your point is, and I would agree were it not for the argument that AGI would need fewer resources than ASI.
So we stumble upon AGI. Whatever resources it takes to get to AGI, it will need a lot more of them to get to ASI. There are real-world implications to that (upgrading hardware, etc.).
So AGI would first have to get better hardware to get better, and then need even more hardware to get better still. All of this takes a lot of time.
Of course, if the hardware is there and the AGI is basically just very poorly optimised, sure, it could optimise itself a bit and use the now-free hardware resources. I just think that's not enough.
An ASI will not just need to upgrade from a 3090 to a 4090. It probably needs so much hardware that it will take weeks, if not months or years.
For all intents and purposes, it will first need to invent new hardware to even get enough hardware to get smarter. And not just one generation of new hardware, but many.
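To put rough numbers on that (a back-of-envelope sketch: the GPU spec figures are approximate public FP32 numbers, and the 1000x compute target is purely an assumption for illustration):

```python
import math

# Approximate public FP32 throughput figures (TFLOPS) - rough, not exact
tflops_3090 = 35.6
tflops_4090 = 82.6
per_gen_gain = tflops_4090 / tflops_3090  # ~2.3x from one GPU generation

# Assume, purely for illustration, that ASI needs ~1000x the compute of AGI
target_gain = 1000
generations = math.ceil(math.log(target_gain) / math.log(per_gen_gain))

print(f"one generation buys ~{per_gen_gain:.1f}x")
print(f"generations needed for {target_gain}x: {generations}")  # ~9
```

Under those assumptions, a per-generation gain of ~2.3x means many hardware generations (or massive scale-out) before a 1000x jump, which is the "lot of time" part of the argument.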
blueSGL t1_izuf5tm wrote
> Of course, if the hardware is there and the AGI is basically just very poorly optimised, sure, it could optimise itself a bit and use the now-free hardware resources. I just think that's not enough.
What if the 'hard problem of consciousness' is not really that hard, there is a trick to it, no one has found it yet, and an AGI realizes what it is? E.g. intelligence is currently brute-forced by method X, and yet method Y runs much cleaner, with less overhead and better results. Something akin to targeted sparsification of neural nets, where a load of weights can be removed and yet the outputs barely change.
(Look at all the tricks that were discovered to get Stable Diffusion running on a shoebox, in comparison to when it was first released.)
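For a feel of the sparsification trick mentioned above, here is a minimal toy sketch of magnitude pruning (the layer size, sparsity level, and weight distribution are all made up; real trained networks tolerate aggressive pruning for related but subtler reasons):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix: most weights near zero, ~10% carry the signal
# (a crude stand-in for how trained nets concentrate importance)
W = rng.normal(size=(256, 256)) * 0.01
important = rng.random(W.shape) < 0.1
W[important] = rng.normal(size=important.sum())

x = rng.normal(size=256)  # example input

# Magnitude pruning: zero out the smallest 80% of weights
threshold = np.quantile(np.abs(W), 0.8)
W_pruned = np.where(np.abs(W) > threshold, W, 0.0)

# Compare outputs of the dense vs pruned layer
dense_out, pruned_out = W @ x, W_pruned @ x
rel_change = np.linalg.norm(dense_out - pruned_out) / np.linalg.norm(dense_out)
print(f"output change with 80% of weights removed: {rel_change:.1%}")  # a few %
```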
Geneocrat t1_izvxa40 wrote
Great point. AI will be doing a lot more with a lot less.
There have to be so many inefficiencies in the design of CNNs and reinforcement learning.
Clearly you don't need the totality of human knowledge to be as smart as an above-average 20-year-old, but that's what we've been using.
ChatGPT is like a well-mannered college student who's really fast at using Google, but it obviously took millions of training hours.
Humans are pretty smart with limited exposure to knowledge and just thousands of hours. When ChatGPT makes its own AI, it's going to be bananas.
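To make that data-efficiency gap concrete (back-of-envelope only; both figures are rough assumptions, the human estimate especially so):

```python
# Rough, order-of-magnitude figures - assumptions, not measurements
human_words_by_age_20 = 600e6   # ~80k words/day heard + read over ~20 years
gpt3_training_tokens = 300e9    # GPT-3's reported training set, ~300B tokens

ratio = gpt3_training_tokens / human_words_by_age_20
print(f"GPT-3 saw roughly {ratio:.0f}x more text than a human by age 20")  # ~500x
```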
Accomplished_Diver86 t1_izuhyen wrote
Yeah, sure. I agree, that was the point. Narrow AI could potentially see what it takes to make AGI, which would in turn free up resources.
All I am saying is that it would take a ton of new resources to make the AGI into an ASI.
__ingeniare__ OP t1_izug0hr wrote
I don't think you fully understood my point; it is slightly different from the regular "self-improving AGI -> ASI in short time" argument. What I meant was that, as the narrow intelligences we have built are gradually combined into a multi-modal, large-scale general AI, it will be superhuman from the get-go. There won't be a period in which we have AGI and simply wait for better hardware to scale to ASI. We will build narrow superintelligence from the beginning and gradually expand its range of domains until it covers everything humans can do. At that point, we have both AGI and ASI.
Accomplished_Diver86 t1_izuho09 wrote
Yeah, well, that I just don't agree with.
__ingeniare__ OP t1_izui5xn wrote
Which part?
Accomplished_Diver86 t1_izuikfz wrote
As you have said (I will paraphrase): "We will build a dumb ASI and expand its range of domains."
My argument is that ASI inherently has a greater range of domains than AGI.
So if we expand it, there will be a point where the range of domains is human-like (AGI) but not ASI-like.
TL;DR: You cannot build a narrow ASI and scale it. That's not an ASI but a narrow AI.
__ingeniare__ OP t1_izujqe0 wrote
That is more a matter of word choice; the concept is the same. I called it narrow superintelligence because the fact that it is better than humans is important to the argument.
Let's call it narrow AI then: by the time it covers all the domains of human knowledge, it will also be significantly better than humans in all of those domains. Hence, when we get AGI, we also get ASI.
Accomplished_Diver86 t1_izujzaq wrote
Sure, but you are still forgetting the first part of the picture. Expansion means movement. There will be a time when it is good, but not good in all domains. This will resemble what we call AGI.
Humans are good, just not in all the domains and ranges you might wish we were. It's the same thing with AI.
TL;DR: Yes, but no.
__ingeniare__ OP t1_izulilb wrote
Ah, I see what you mean. I guess it depends on how strictly you enforce the generality of AGI.