Submitted by trafalgar28 t3_106ahcr in MachineLearning
I have been working on a project with the GPT-3 API for almost a month now. The main drawback of GPT-3 is that the context window is capped at roughly 4,000 tokens, shared between the prompt and the completion, where a token is roughly equivalent to ¾ of an English word. Due to this, providing a large context to GPT-3 is quite difficult.
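(For anyone wanting to check how close a prompt is to the cap, here's a rough sketch using OpenAI's tiktoken library; the encoding name is my assumption for the GPT-3 base models.)

```python
import tiktoken

# "gpt2" / r50k_base is the tokenizer used by the original GPT-3
# models (an assumption; check the model's docs for your variant).
enc = tiktoken.get_encoding("gpt2")

prompt = "Some long context I want to send to the model..."
n_tokens = len(enc.encode(prompt))

# Prompt tokens + requested completion tokens must stay under the cap.
print(f"{n_tokens} tokens used; ~{4000 - n_tokens} left for the completion")
```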
Is there any way to resolve this issue?
Bulky_Highlight_3352 t1_j3fqhij wrote
There are tools to work around this limitation, such as LangChain, which supports summarizing previous context so the prompt stays within the window: https://github.com/hwchase17/langchain
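Something like this (a minimal sketch against the LangChain API as of early 2023; the exact module paths are assumptions and have moved around between versions):

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationSummaryMemory

llm = OpenAI(temperature=0)

# Instead of replaying the full transcript on every call, this memory
# keeps a running LLM-generated summary of the conversation so far,
# so the prompt stays well under the 4,000-token cap.
conversation = ConversationChain(
    llm=llm,
    memory=ConversationSummaryMemory(llm=llm),
)

conversation.predict(input="Here is a long document I want to discuss...")
conversation.predict(input="Summarize what we've established so far.")
```

The tradeoff is that summarization is lossy, so fine-grained details from earlier turns can drop out of the condensed context.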