Yomiel94 t1_j4xzlr9 wrote
Reply to comment by gaudiocomplex in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
This seems like a stretch. GPT might be the most general form of artificial intelligence we’ve seen, but it’s still not an agent, and it’s still not cognitively flexible enough to really be general on a human level.
And just scaling up the existing model probably won’t get us there. Another large conceptual advancement that can give it something like executive function and tiered memory seems like a necessary precondition. Is there any indication at this point that such a breakthrough has been made?
[deleted] t1_j4yshql wrote
I may be naive here, but the local/temporary memory ChatGPT keeps within each of its 'tabs' is, in a way, already a set of memories...
If there were a way for those 'memories' to be grouped, with a kind of soft recollection across all of them, I imagine that would be a pathway to a full agent -- think: perhaps you do >50% of your coding work through GPT directly, and the Agent can also see the rest of the work you are doing.
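As a rough illustration (nothing to do with how OpenAI actually stores conversations), that "soft recollection" could be as simple as ranking past snippets against the current prompt and pulling the closest ones back into context. The `embed` below is just a bag-of-words stand-in so the sketch runs on its own; a real system would use learned embeddings:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words count, purely to keep the sketch
    # self-contained. A real system would use an embedding model.
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class MemoryStore:
    """Groups snippets from past conversations ('tabs') and recalls the closest ones."""

    def __init__(self):
        self.snippets = []  # (tab_id, text, vector)

    def remember(self, tab_id: str, text: str):
        self.snippets.append((tab_id, text, embed(text)))

    def recall(self, prompt: str, k: int = 3):
        query = embed(prompt)
        ranked = sorted(self.snippets, key=lambda s: similarity(query, s[2]), reverse=True)
        return [(tab, text) for tab, text, _ in ranked[:k]]

# The recalled snippets would be prepended to the next prompt as context.
store = MemoryStore()
store.remember("tab-coding", "Refactored the auth module; tests still failing on token refresh.")
store.remember("tab-planning", "Project y needs the reporting feature by Friday.")
print(store.recall("How close is project y to completion?"))
```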
It sees your calendar.
It knows you have done x lines of code on y project and it knows exactly how close you are to completion (based on requirements outlined in your Outlook).
I think it's almost trivial (in the grand scheme) to hook ChatGPT into several different programs and achieve a fairly limited 'consciousness' -- particularly if we are simply defining 'consciousness' as intelligence * ability to plan ahead.
Basically, it has intelligence *almost* covered; its ability to plan ahead would depend on calendars in the first instance.
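To make that concrete, here's a toy sketch of the context an agent would need in order to "plan ahead" from a calendar plus project status. The Outlook/calendar connectors are left out and `build_daily_briefing` is a made-up helper, so this only shows the shape of the prompt, not a real integration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CalendarEvent:
    day: date
    title: str

@dataclass
class ProjectStatus:
    name: str
    lines_done: int
    lines_required: int  # e.g. derived from requirements outlined in Outlook

def build_daily_briefing(events: list[CalendarEvent], projects: list[ProjectStatus]) -> str:
    # Compose the context the model would be given so it can plan ahead:
    # upcoming deadlines from the calendar plus progress toward each requirement.
    lines = ["Upcoming events:"]
    lines += [f"- {e.day}: {e.title}" for e in sorted(events, key=lambda e: e.day)]
    lines.append("Project progress:")
    for p in projects:
        pct = 100 * p.lines_done / p.lines_required
        lines.append(f"- {p.name}: {p.lines_done}/{p.lines_required} lines ({pct:.0f}% complete)")
    lines.append("Given the above, propose a prioritised plan for today.")
    return "\n".join(lines)

briefing = build_daily_briefing(
    [CalendarEvent(date(2023, 1, 20), "Demo of project y")],
    [ProjectStatus("project y", lines_done=800, lines_required=1200)],
)
print(briefing)  # This string would be sent to the model as part of its prompt.
```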
Further on, I believe it will need access to all spoken word and lived experience, but that is too creepy, too soon, I think. Otherwise, how else would it have sufficient data to be an 'Agent'?
theonlybutler t1_j4zd3pg wrote
Yeah, I agree. I think the key thing would be its ability to fact-check itself: discern whether its statement is meant to be factual or not (probably a spectrum) and then fact-check it. If it could do this, it'd be a game changer.
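Purely as a sketch of that pattern (the `llm` function below is a hardcoded stand-in for whatever model call you'd actually use, and the source list would come from retrieval), the loop would be: classify whether the statement is presented as factual, then verify it against sources:

```python
def llm(prompt: str) -> str:
    # Stand-in for a call to the model; hardcoded so the sketch runs on its own.
    if "opinion or a factual claim" in prompt:
        return "factual claim"
    return "UNSUPPORTED"

def self_check(statement: str, sources: list[str]) -> str:
    # Step 1: have the model decide whether the statement is even meant to be factual.
    kind = llm(f"Is the following statement an opinion or a factual claim?\n{statement}")
    if "factual" not in kind:
        return "opinion - no fact check needed"
    # Step 2: ask the model to verify the claim against the retrieved sources.
    evidence = "\n".join(sources)
    verdict = llm(f"Given these sources:\n{evidence}\nIs this claim supported?\n{statement}")
    return f"factual claim - verdict: {verdict}"

print(self_check("The moon is made of cheese.", ["NASA: the moon is composed of rock."]))
```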
Bierculles t1_j4zj7h1 wrote
It's a proto-AGI: an AI that can communicate on a human level, but still far from being able to do everything a human can. I think, at least -- maybe I'm wrong.