
[deleted] t1_j4yshql wrote

I may be naive here, but the way ChatGPT keeps some kind of local/temporary memory within each of its 'tabs' (conversations) is, in a sense, already a form of memory...

If there were a way for those 'memories' to be grouped together, with some kind of soft recollection across all of them, I imagine that would be a pathway to a full agent. Think of it this way: perhaps you do >50% of your coding work through GPT directly, and the agent can see the rest of the work you are doing.
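The "grouped memories plus soft recollection" idea above could be sketched roughly like this. Everything here is invented for illustration (there is no such ChatGPT API): a store that keeps memories per tab and ranks them all by naive word overlap with a query.

```python
# Hypothetical sketch only: per-"tab" memory with soft recall across tabs.
# All names (MemoryStore, soft_recall, tab ids) are made up for illustration.
from collections import defaultdict


class MemoryStore:
    def __init__(self):
        # tab_id -> list of memory strings from that conversation
        self.tabs = defaultdict(list)

    def remember(self, tab_id, text):
        self.tabs[tab_id].append(text)

    def soft_recall(self, query, top_k=3):
        """Rank every stored memory by crude word overlap with the query."""
        query_words = set(query.lower().split())
        scored = []
        for tab_id, memories in self.tabs.items():
            for text in memories:
                overlap = len(query_words & set(text.lower().split()))
                if overlap:
                    scored.append((overlap, tab_id, text))
        scored.sort(reverse=True)
        return [(tab_id, text) for _, tab_id, text in scored[:top_k]]


store = MemoryStore()
store.remember("tab-1", "refactored the auth module on project y")
store.remember("tab-2", "calendar says project y review is Friday")
store.remember("tab-3", "notes about an unrelated hobby")
hits = store.soft_recall("what is the status of project y")
```

A real system would presumably use embeddings rather than word overlap, but the shape is the same: memories stay scoped to their tab, while recall cuts across all of them.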

It sees your calendar.

It knows you have done x lines of code on y project and it knows exactly how close you are to completion (based on requirements outlined in your Outlook).

I think it's almost trivial (in the grand scheme) to hook ChatGPT into several different programs and achieve a fairly limited 'consciousness' -- particularly if we are simply defining 'consciousness' as intelligence * ability to plan ahead.

Basically, it has intelligence *almost* covered; its ability to plan ahead depends, in the first instance, on access to calendars.

Further on, I believe it will need access to all of your spoken words and experiences, but that is just too creepy too soon, I think. How else would it have sufficient data to act as an 'Agent'?
