Submitted by terserterseness t3_10fxryj in MachineLearning
MysteryInc152 t1_j50pw6e wrote
Reply to comment by IntelArtiGen in [D] Inner workings of the chatgpt memory by terserterseness
With embeddings, it should theoretically not have a hard limit at all. But experiments here suggest a sliding context window of 8096 tokens:
https://mobile.twitter.com/goodside/status/1598874674204618753?t=70_OKsoGYAx8MY38ydXMAA&s=19
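The embedding idea is roughly: store each past turn as a vector, then for a new prompt retrieve the most similar old turns and stuff them back into the context. A minimal sketch below, with a toy bag-of-bigrams "embedding" standing in for a real learned encoder (class and function names are made up for illustration; a real system would call an embedding model):

```python
import numpy as np

def embed(text):
    # Toy placeholder embedding: character-bigram counts hashed into a
    # fixed-size vector, then unit-normalized. A real system would use a
    # learned sentence-embedding model instead.
    vec = np.zeros(256)
    for a, b in zip(text, text[1:]):
        vec[(ord(a) * 31 + ord(b)) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class ConversationMemory:
    """Store past turns; retrieve the most similar ones for a new prompt."""

    def __init__(self):
        self.turns = []    # raw text of each past turn
        self.vectors = []  # matching embedding for each turn

    def add(self, text):
        self.turns.append(text)
        self.vectors.append(embed(text))

    def recall(self, query, k=2):
        # Cosine similarity against every stored turn
        # (dot product suffices since vectors are unit-norm).
        sims = np.array([v @ embed(query) for v in self.vectors])
        top = sims.argsort()[::-1][:k]
        return [self.turns[i] for i in top]

memory = ConversationMemory()
memory.add("My dog is named Biscuit.")
memory.add("I work on compilers at a small startup.")
memory.add("Biscuit loves chasing squirrels in the park.")
print(memory.recall("what is my dog called?", k=2))
```

The retrieved turns would then be prepended to the prompt, so the model "remembers" things far outside its fixed context window — which is why an embedding-backed chat has no hard memory limit in principle.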
Daos-Lies t1_j50vdq9 wrote
That is indeed fair enough.
Big fan of the concept of screaming at it until it forgets ;)
And I suppose it is very possible that, in my 'v long conversations with it', the topic repeated at some point (which I'm sure it did), and that could have fooled me into thinking it was remembering things from right at the start.
MysteryInc152 t1_j50ym7g wrote
There's a repo that actually uses embeddings for long-term conversations that you can try out.