Submitted by MysteryInc152 t3_115x1it in MachineLearning
yoshiwaan t1_j96uxg7 wrote
Reply to comment by blueSGL in [D] Toolformer implementation using only few-shot prompting by MysteryInc152
Really? As in the order of operations is: token parsing => Toolformer => LLM?
Genuine question: is the text/token parsing for queries to an LLM (e.g. ChatGPT) performed separately, before the actual LLM is invoked, or is the text/token parsing part of the LLM itself? I figured it was the latter, and that you couldn't just insert a tool there.
blueSGL t1_j96yan4 wrote
Sorry, from what I understand it goes something like this:
The LLM processes the prompt and formats its output as per the initial few-shot demos.
That output is an intermediate step in plain text, containing keywords that then get picked up by Toolformer.
Toolformer goes off, does the search, and returns predefined chunks formatted from the search results.
The prompt is then stuffed with those chunks and the question is asked again with the retrieved search context added.
(And I'm sure there is more pixie dust sprinkled in somewhere; see the rough sketch below.)
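Here's a minimal Python sketch of how I picture that loop, just to make the order of operations concrete. Everything in it is a made-up placeholder, not the actual implementation from the post: `llm_complete`, `run_search`, and the `[SEARCH: ...]` marker format are all assumptions for illustration.

```python
import re

def llm_complete(prompt: str) -> str:
    """Stand-in for a real LLM completion call (e.g. an API request). Placeholder only."""
    raise NotImplementedError

def run_search(query: str) -> str:
    """Stand-in for a real search API; returns a formatted text chunk. Placeholder only."""
    raise NotImplementedError

# Few-shot demos that show the model how to emit plain-text tool markers
# like [SEARCH: some query] in its intermediate output (format is hypothetical).
FEW_SHOT_DEMOS = "..."

def answer(question: str) -> str:
    # 1. The LLM processes the prompt and formats its output as per the few-shot demos.
    draft = llm_complete(FEW_SHOT_DEMOS + "\n" + question)

    # 2. The intermediate plain-text output contains keywords that the outer loop picks up.
    queries = re.findall(r"\[SEARCH: (.*?)\]", draft)

    # 3. The tool layer goes off, runs the searches, and returns formatted chunks.
    chunks = [run_search(q) for q in queries]

    # 4. The prompt is stuffed with those chunks and the question is asked again
    #    with the retrieved search context added.
    context = "\n".join(chunks)
    return llm_complete(f"{context}\n\nQuestion: {question}\nAnswer:")
```

So in this picture the "tool" never sits inside the LLM at all; it's an outer loop that just reads the model's plain-text output and re-prompts it.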