Submitted by MysteryInc152 t3_115x1it in MachineLearning
MysteryInc152 OP t1_j93saow wrote
In-context learning is overpowered lol. Really is implicit finetuning.
This is the prompt template:
>You are an AI assistant with several tools available to you. The tools are the following:
>
>MATH: This allows you to evaluate mathematical expressions using the math.js library.
>
>NOW: This returns the current date and time. You must not pass any arguments to this tool!
>
>SEARCH: This returns a web search result for the given string argument.
>
>DO NOT USE TOOLS WITHIN TOOLS! KEEP ALL TOOL CALLS SEPARATE FROM EACH OTHER!
>
>User: What is 10 times 14?
>
>Assistant: 10 * 14 is [MATH(10 * 14) -> 140] 140.
>
>User: How old is Donald Trump?
>
>Assistant: Donald Trump is [SEARCH(Donald Trump age) -> 76] 76 years old.
>
>User: What day is it?
>
>Assistant: It is [NOW() -> 13/02/2023, 20:01:35] the 13th of February 2023.
>
>User: Has Avatar the way of water been released yet?
>
>Assistant: [SEARCH(Avatar the way of water release date) -> 22.11.2022] Avatar: The way of water was released on the 22nd of november 2022. Today is [NOW() -> DATE_TODAY] the 13th of February 2023. Therefore, [MATH(2023 > 2022) -> true] it was released last year.
>
>User: Where was the inventor of the light bulb born, and what is the last country his home country was at war with?
>
>Assistant: Thomas Edison, the inventor of the lightbulb, was born in [SEARCH(Thomas Edison birthplace) -> Milan, Ohio] Milan, Ohio. The last country the United States was at war with was [SEARCH(last country US at war with) -> Iraq] Iraq.
>
>User: USER_INPUT
>
>Assistant:
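The template above teaches the model a convention: emit `[TOOL(args) ->`, at which point the app pauses generation, runs the tool, splices in the result, and resumes. The actual implementation isn't shown in this thread, so here is a minimal Python sketch of that parsing step; the regex, the `run_tool` dispatch, and the toy tool handlers are all my own assumptions, not the app's real code:

```python
import re

# Matches the point where the model has opened a tool call: "[TOOL(args) ->"
CALL_RE = re.compile(r"\[(MATH|NOW|SEARCH)\(([^)]*)\)\s*->")

def run_tool(name, arg):
    """Hypothetical tool dispatch. A real implementation would call
    math.js, the system clock, or a search API respectively."""
    if name == "MATH":
        # toy stand-in for math.js expression evaluation
        return str(eval(arg, {"__builtins__": {}}))
    if name == "NOW":
        return "13/02/2023, 20:01:35"  # stub clock
    if name == "SEARCH":
        return "stub search result for: " + arg

def fill_tool_calls(partial):
    """If the model's partial output contains an open tool call,
    run the tool and splice its result in after the '->'.
    Returns (text, whether a tool was called)."""
    m = CALL_RE.search(partial)
    if not m:
        return partial, False
    result = run_tool(m.group(1), m.group(2))
    return partial[: m.end()] + " " + result + "]", True
```

A real streaming implementation would additionally track which calls have already been filled so completed `[TOOL(...) -> result]` spans aren't re-executed.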
blueSGL t1_j94bno5 wrote
Let me see if I get this right.
Toolformer Zero is a layer between the LLM and the user.
That layer picks up keywords, performs the search, and then returns a predefined chunk formatted from the search results.
Then the LLM's prompt is stuffed with that chunk and the question is asked again?
And it just works?
MysteryInc152 OP t1_j94ep4b wrote
Yup. That's pretty much it lol
blueSGL t1_j94yv6s wrote
Any idea how they format the search results? Out of all of them, that seems like the trickiest part. No idea if the Google summary text preview contains the answer, or enough context to derive the answer. If it needs to actually go to the website, the tool has no knowledge of how the site will be formatted or how long it is (potential context window issues).
_Minos t1_j95amf3 wrote
Hey, creator of above implementation here.
You're right that there are lots of ways accuracy could feasibly be improved: using more varied APIs, navigating to search results and creating embeddings of the resulting websites, etc. Ultimately, a lot of this more advanced chaining of LLM and API requests can be done with libraries like langchain.
For this one, I wanted to show how effective a much simpler approach can be. For search results, I simply chain together the returned Google "snippets" and inject the resulting string back into the prompt. Oftentimes this means there can actually be conflicting information, such as dates for events adjacent to but ultimately irrelevant to the search query. However, this is where GPT generally does an excellent job of picking out the correct bit of info, so no more sophisticated filtering or parsing by the app is required: just a raw dump of the search results to the model.
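The snippet-chaining step described above can be sketched in a few lines. The response shape here follows Google's Custom Search JSON API (`items[].snippet`), but treat the field names and the separator as assumptions rather than the app's actual code:

```python
def chain_snippets(search_response, limit=3):
    """Concatenate the text snippets from a search API response into one
    string that gets injected back into the prompt, conflicts and all.
    No filtering: the LLM itself picks out the relevant bit."""
    items = search_response.get("items", [])
    snippets = [item.get("snippet", "") for item in items[:limit]]
    return " | ".join(s for s in snippets if s)
```

The design point is that this function deliberately does no parsing or ranking; disambiguating conflicting snippets is left entirely to the model.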
pyepyepie t1_j95f3m2 wrote
I actually think your approach shows the idea better than the original paper. However, the original paper can be implemented with smaller language models, which might be better for people who want to deploy it. Overall, I think the application is almost trivial, and I am not surprised it worked well for you (given the crazy power of LLMs).
Great work!
yoshiwaan t1_j96uxg7 wrote
Really? As in the order of operations is: token parsing => Toolformer => LLM?
Genuine question: is the text/token parsing for queries to an LLM (e.g. ChatGPT) performed separately, before the actual LLM is invoked, or is the text/token parsing part of the LLM itself? I figured it was the latter, and that you couldn't just insert a tool there.
blueSGL t1_j96yan4 wrote
Sorry, from what I understand it goes something like this:
The LLM processes the prompt and formats its output as per the initial few-shot demos.
That output is an intermediary step in plain text, including keywords that then get picked up by Toolformer.
Toolformer goes off, does the searches, and returns predefined chunks formatted from the search results.
The prompt is then stuffed with those chunks and the question is asked again with the added retrieved search context.
(And I'm sure there is more pixie dust sprinkled in somewhere.)
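The steps above amount to a generate/execute/re-prompt loop. A framework-agnostic Python sketch, where `complete` (the LLM call) and `handle_tools` (the tool-detection layer) are hypothetical callables standing in for the real app's components:

```python
def answer(prompt_template, user_input, complete, handle_tools, max_rounds=5):
    """Run the LLM; whenever the tool layer detects and fills a
    '[TOOL(...) ->' call, stuff the result back into the prompt and
    generate again, until no more tool calls are emitted."""
    prompt = prompt_template.replace("USER_INPUT", user_input)
    filled = ""
    for _ in range(max_rounds):
        output = complete(prompt)              # LLM generates, may open a tool call
        filled, called = handle_tools(output)  # layer runs the tool, splices result
        if not called:
            return filled                      # no tool call left: final answer
        prompt += filled                       # stuff the prompt and go again
    return filled
```

The `max_rounds` cap is a guard against the model looping on tool calls indefinitely.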
badabummbadabing t1_j95kmxk wrote
This is absolutely wild.
imaginethezmell t1_j9fsdtm wrote
Do you know how to debug these errors? I added the keys and then tried to send a prompt, and it gave this error. It seems to be hitting an OpenAI API rate limit (429), after just one request?
> Failed to load resource: the server responded with a status of 429 ()
> An error occurred. :(
>
> (the same line repeated eight times)
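A 429 from the OpenAI API means the request was rate-limited; with a fresh key after a single request, it is often a quota or billing issue rather than request volume. The standard client-side mitigation is retry with exponential backoff. A generic sketch, assuming the client surfaces the failure as an exception mentioning the status code (here modeled with `RuntimeError` for illustration):

```python
import time

def with_backoff(call, retries=5, base_delay=1.0):
    """Retry a callable that raises on HTTP 429, doubling the wait
    between attempts; re-raise anything else (or the final failure)."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError as err:  # stand-in for the client's rate-limit error
            if "429" not in str(err) or attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

If the error persists on every request, checking the account's usage limits and billing status is the more likely fix than backoff.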