
visarga t1_j1my2tt wrote

If you want ChatGPT to incorporate information from outside sources, you have to paste search results into the context, and that pasted material can easily run 4,000 tokens. For every interaction afterwards you pay that same ~4,000-token price again, because the whole history is resent with each request. At davinci-class pricing of roughly $0.02 per 1K tokens, that's about $0.08 per call, so you'd be paying around $1 after 10 replies.
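Back-of-the-envelope version of that math (a sketch, not real API code; the $0.02/1K davinci-style rate, the reply size, and the function name are assumptions for illustration):

```python
# Rough cost estimate for a chat where ~4,000 tokens of pasted search
# results (plus the growing history) are resent on every turn.
# Assumes davinci-style pricing of ~$0.02 per 1K tokens.
PRICE_PER_1K_TOKENS = 0.02

def conversation_cost(context_tokens=4000, reply_tokens=200, turns=10):
    total = 0.0
    history = context_tokens
    for _ in range(turns):
        # Each turn pays for the full history plus the new reply.
        total += (history + reply_tokens) * PRICE_PER_1K_TOKENS / 1000
        history += reply_tokens  # the reply is appended to the history
    return total

print(f"${conversation_cost():.2f} after 10 replies")  # ~$1
```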

You'd need to do this whenever you want to summarise an article, ask questions grounded in a reference text, or just use ChatGPT as your top layer above search, like you.com/chat.

It's not cheap enough to use in bulk, for example to validate Wikipedia references: you'd need to call the model millions of times, and at ~$0.08 per 4,000-token call, a million calls is on the order of $80,000.

12

blueSGL t1_j1n8084 wrote

They seem to be getting clever, especially around certain concepts. I doubt they have hard-coded training around [subject] such that the returned text is always [block text from OpenAI]; more likely they have trained it to return a [keyword token] when [subject] gets mentioned, and that token is what pulls in the [block text from OpenAI].

You can bet they are going to work hard, with every trick they can think of, to cut inference cost. Keeping a lookup table for a lot of common things and having the model return a [keyword token] that activates an entry would be one way of going about it, as sketched below.
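A minimal sketch of what that post-processing could look like (the token markers, the table contents, and the function name here are all hypothetical, not anything OpenAI has documented):

```python
# Hypothetical post-processing: if the model emits a special keyword
# token, substitute a canned block from a lookup table instead of
# having the model generate that text token by token.
CANNED_BLOCKS = {
    "<POLICY_SELF_HARM>": "If you or someone you know is struggling, ...",
    "<POLICY_MEDICAL>": "I'm not a doctor, but ...",
}

def expand_keywords(model_output: str) -> str:
    for token, block in CANNED_BLOCKS.items():
        model_output = model_output.replace(token, block)
    return model_output

# The model only generates the short token, not the whole block, so
# the inference cost of these common responses stays small.
print(expand_keywords("I can't help with that. <POLICY_MEDICAL>"))
```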

This is also likely how this sort of system would work in a tech-support setting. You don't need the model waxing lyrical over [step (n)]; you just need it to tell the customer to perform [step (n)], with maybe a little fluff at the start or the end to make things flow more smoothly.
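In that setting the split might look something like this (again a sketch; the step table and the assumption that the model generates only the short connective text are mine):

```python
# Hypothetical tech-support flow: canned steps come from a fixed
# table; the model is only asked for a short opener/closer, so very
# few tokens are actually sampled per turn.
STEPS = {
    1: "Unplug the router, wait 30 seconds, and plug it back in.",
    2: "Check that the status light turns solid green.",
}

def support_reply(step_number: int, opener: str, closer: str) -> str:
    # opener/closer would come from the model; the step text does not.
    return f"{opener} {STEPS[step_number]} {closer}"

print(support_reply(1, "No problem, let's try this first:",
                    "Let me know how it goes!"))
```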

1

SnipingNinja t1_j1ni87i wrote

Look at Google's CaLM; afaict it's trying to solve this exact issue.

2