Submitted by ofirpress t3_xvkhz9 in MachineLearning
Self-ask and Self-ask + Google Search
We just put out this preprint showing that a new prompt, which we call Self-ask, improves GPT-3's ability to answer complex questions.
The prompt simply has the model ask (and answer) sub-questions before it answers the main input question.
Self-ask with a 1-shot prompt answering a question (using GPT-3)
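In case the image doesn't load, here's a minimal sketch of what calling GPT-3 with a Self-ask style 1-shot prompt looks like (assuming the legacy OpenAI Completion API with text-davinci-002; see the notebook linked below for the exact prompt wording):

```python
# Minimal sketch of a Self-ask style 1-shot prompt; the exact example wording
# used in the paper/notebook may differ slightly.
import openai

SELF_ASK_PROMPT = """\
Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins

Question: {question}
Are follow up questions needed here:"""

def self_ask(question: str) -> str:
    """GPT-3 decomposes the question into sub-questions and answers them itself."""
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=SELF_ASK_PROMPT.format(question=question),
        max_tokens=256,
        temperature=0.0,
        stop=["\nQuestion:"],  # stop before the model starts a new example
    )
    return response["choices"][0]["text"]
```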
The format of this prompt also lets us automatically parse out the sub-questions and have Google answer them instead of GPT-3. This improves performance and lets the system answer questions that neither GPT-3 nor Google could answer on its own.
Self-ask + Google Search: GPT-3 text in green, Google retrieved text in cyan.
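The search-engine variant just interrupts generation whenever the model is about to write an "Intermediate answer:" and fills that line in with a search result instead. A rough sketch of that loop (reusing SELF_ASK_PROMPT from the sketch above; google_answer is a placeholder for whatever search API you use, and the notebook linked below is the real demo):

```python
# Rough sketch of Self-ask + search: pause GPT-3 whenever it is about to answer
# a sub-question and let a search engine answer it instead.
# Assumes SELF_ASK_PROMPT from the sketch above; google_answer is a placeholder.
import openai

def google_answer(query: str) -> str:
    """Placeholder: return a short answer for `query` from a web-search API."""
    raise NotImplementedError("plug in your search API of choice here")

def self_ask_with_search(question: str, max_steps: int = 5) -> str:
    prompt = SELF_ASK_PROMPT.format(question=question)
    for _ in range(max_steps):
        chunk = openai.Completion.create(
            engine="text-davinci-002",
            prompt=prompt,
            max_tokens=256,
            temperature=0.0,
            # stop right before the model would answer its own sub-question
            stop=["Intermediate answer:", "\nQuestion:"],
        )["choices"][0]["text"]
        prompt += chunk
        if "So the final answer is:" in chunk:
            return chunk.split("So the final answer is:")[-1].strip()
        # parse out the latest sub-question and answer it with the search engine
        sub_question = chunk.split("Follow up:")[-1].strip()
        prompt += "Intermediate answer: " + google_answer(sub_question) + "\n"
    return prompt  # give back the full trace if no final answer was reached
```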
Google answers the following question incorrectly:
But Self-ask + Google gets this right:
Our paper has lots more info:
The Self-ask + Google Search method is at:
https://github.com/ofirpress/self-ask/blob/main/self-ask_plus_search-engine_demo.ipynb
I'll be here to answer any questions!
RoboticAttention t1_ir1fl4p wrote
Very interesting! What other ways of augmenting AI capabilities do you see following? Do you think the effectiveness of this suggests that symbolic approaches will be connected with NNs, for example an AI agent equipped with a separate mathematical theorem checker, or another whiteboard for noting down intermediate calculations?