TFenrir t1_irbgq74 wrote
Reply to comment by ihateshadylandlords in “Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032” by Dr_Singularity
What do you mean by proof of concept? These are real models, and real insights that we gain. Those insights are sometimes applied almost immediately to models the public has access to.
Do you mean, for this 2032 prediction, whether they're talking about AGI being available to the public or only to people behind closed doors? It would be the latter, because the nature of this prediction is that AGI would emerge on the bleeding-edge supercomputers Google uses in its research.
Honestly I'm not even sure how AGI could be turned into a product, it would just be too... Disruptive? The nature of how regular people would interact with it is a big question mark to me.
ihateshadylandlords t1_irbhzbm wrote
By proof of concept, I meant that it was something that they’ve disclosed they have, but aren’t making it publicly available for whatever reason.
If the AGI model can be applied to programs that the public can use (like GPT3), then that would be great.
TFenrir t1_irbos42 wrote
> If the AGI model can be applied to programs that the public can use (like GPT3), then that would be great.
A publicly available AGI just wouldn't be possible for quite a while after its invention, though. I don't even really call GPT-3 publicly available - you have API access, but you don't actually have access to the model itself. We do have genuinely public models, though: Stable Diffusion, GPT-J, RoBERTa, etc.
Regardless, think of it this way... Imagine a group of scientists standing around a monitor, attached to a private, heavily secured internal network, which uses a distributed supercomputer just to run inference on a model they've finished training in a very secure facility. At this point, the models leading up to it have been so powerful that containment is a legitimate concern.
They are there to evaluate whether or not this model constitutes an AGI - whether it has anything resembling consciousness, and whether it's an existential threat to the species.
They're not going to just... Release that model into the wild. They're not even going to give the public access, or awareness of this model in any way shape or form, for a while.
That doesn't even get into the hardware requirements that would probably exist for this first model.
ihateshadylandlords t1_irbth00 wrote
I’m not tech savvy at all; I didn’t know there was a difference between API access and GPT-3 itself. But yeah, that’s why I’m not as hyped as a lot of people on here about AGI being created. It’s not like we’ll be able to use it (unless someone creates an open-source version of it).
iNstein t1_irceopl wrote
API stands for Application Programming Interface. It is basically a set of commands that programmers can use to access and communicate with another program like GPT-3 - a kind of specialised instruction set for talking to it from your own program.
Having API access to something like GPT-3 is, in a functional sense, very similar to having GPT-3 running on your own computer. It just means you don't need the high-performance hardware to run it yourself. For something like GPT-3, it's the best way to get as many ordinary people as possible using it without everyone having to buy extremely expensive hardware.
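To make the distinction concrete, here is a minimal Python sketch of what "API access" looks like from the client's side: the program never holds the model weights, it just builds a request describing what it wants and would POST it to the provider, which runs inference on its own hardware. The endpoint URL, model name, and field names below are hypothetical illustrations, not the real GPT-3 API.

```python
import json

# Hypothetical endpoint - illustrative only, not a real service.
API_URL = "https://api.example.com/v1/completions"

def build_completion_request(prompt, max_tokens=50):
    """Construct the JSON payload a client would send to the hosted model.

    Note what is absent: no model weights, no GPU code. The client only
    describes the job; the provider's hardware does the inference.
    """
    return {
        "model": "gpt3-like-model",  # which hosted model to run
        "prompt": prompt,            # the text to continue
        "max_tokens": max_tokens,    # cap on the generated length
    }

payload = build_completion_request("The singularity is")
body = json.dumps(payload).encode("utf-8")
# A real client would now POST `body` to API_URL (with an API key header)
# and read the generated text out of the JSON response.
print(payload["model"], "-", payload["prompt"])
```

Compare that with "having the model": running GPT-J locally means downloading gigabytes of weights and owning hardware that can hold them in memory, which is exactly the cost the API spares you.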
ihateshadylandlords t1_ircgn2e wrote
Good to know, thanks for the explanation.