[deleted] t1_j4yshql wrote
Reply to comment by Yomiel94 in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
I may be being naive here, but the way ChatGPT keeps some kind of local, temporary memory within each of its 'tabs' is, in a sense, already a set of memories...
If there were a way for those 'memories' to be grouped, with some kind of soft recollection across them, I imagine that would be a pathway to a full agent -- say you do more than 50% of your coding work through GPT directly, and the Agent can see the rest of the work you are doing.
It sees your calendar.
It knows you have done x lines of code on y project and it knows exactly how close you are to completion (based on requirements outlined in your Outlook).
I think it's almost trivial (in the grand scheme) to hook ChatGPT into several different programs and achieve a fairly limited 'consciousness' -- particularly if we simply define 'consciousness' as intelligence times the ability to plan ahead.
Basically it has intelligence *almost* covered; its ability to plan ahead depends, in the first instance, on access to calendars.
Further on, I believe it will need access to everything you say and experience, but I think that is too creepy to introduce this soon. Otherwise, how else would it have sufficient data to be an 'Agent'?