ofirpress OP t1_ir4m7k0 wrote
Reply to comment by ElectronicCress3132 in [R] Combining GPT-3 with Google Search enables answering complex questions by ofirpress
LaMDA doesn't do multi-hop questions, only single-hop ones. They have two different LMs that talk to each other, whereas we have just one. They finetune their model on specially made data; we just use a prompt.
Our approach is inspired by LaMDA and other amazing previous papers, but it's much simpler and easier to implement.
ofirpress OP t1_ir4hi1k wrote
Reply to comment by 81095 in [R] Combining GPT-3 with Google Search enables answering complex questions by ofirpress
Yup, right now if Google is wrong, the model will still take it. I'm sure there are lots of ways to improve in this direction.
ofirpress OP t1_ir4hgnc wrote
Reply to comment by TheReplier in [R] Combining GPT-3 with Google Search enables answering complex questions by ofirpress
There's no finetuning; it's just a prompt. GPT-3 is smart enough to learn how to decompose questions from the one example in the prompt...
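If you want to see what that looks like concretely, here's a minimal sketch (the prompt follows the self-ask format; the model name and the legacy `openai.Completion` call are just illustrative, not the exact code from the paper):

```python
import openai  # legacy (pre-1.0) OpenAI SDK; assumes openai.api_key is set

# One in-context example that demonstrates decomposing a question,
# followed by the question we actually want answered.
SELF_ASK_PROMPT = """\
Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins

Question: {question}
Are follow up questions needed here:"""

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=SELF_ASK_PROMPT.format(
        question="Who was president of the U.S. when superconductivity was discovered?"
    ),
    max_tokens=256,
    temperature=0,
)
print(response["choices"][0]["text"])
```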
ofirpress OP t1_ir3557a wrote
Reply to comment by 13ass13ass in [R] Combining GPT-3 with Google Search enables answering complex questions by ofirpress
Yup! We don't require any finetuning or special syntax, so it's super easy to extend this and play around with it!
ofirpress OP t1_ir1ha5m wrote
Reply to comment by RoboticAttention in [R] Combining GPT-3 with Google Search enables answering complex questions by ofirpress
Writing down intermediate calculations is not a concept we invented. In our paper we call this 'elicitive prompting' and cite chain-of-thought prompting and the scratchpad papers as previous examples.
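(For anyone who hasn't seen those papers: a chain-of-thought prompt just demonstrates the intermediate reasoning inside the in-context example, roughly like this, adapted from the chain-of-thought paper:)

```
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
   Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls.
   5 + 6 = 11. The answer is 11.

Q: [your question here]
A:
```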
I'm super excited about elicitive prompts (self-ask is in that category too)! I think they're going to enable us to get much more out of these models.
And yes, just like we can integrate Google Search we can also integrate lots of other systems, I'm really excited to see how this research direction develops!
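To give a rough idea of what plugging in a search engine looks like: you generate until the model produces a follow-up question, answer that sub-question with the external system, append the answer to the prompt, and hand control back to the model. A minimal sketch (the `search()` function is a placeholder for whatever system you plug in, and the stop-sequence trick is one way to implement the hand-off, not necessarily our exact code):

```python
import openai  # legacy (pre-1.0) OpenAI SDK; assumes openai.api_key is set

def search(query: str) -> str:
    """Placeholder for an external system (e.g. a search-engine API)
    that answers a single sub-question."""
    raise NotImplementedError

def self_ask_with_search(prompt: str, max_hops: int = 5) -> str:
    # Alternate between the LM (which decomposes the question) and the
    # external system (which answers each sub-question).
    for _ in range(max_hops):
        out = openai.Completion.create(
            model="text-davinci-002",
            prompt=prompt,
            max_tokens=256,
            temperature=0,
            stop=["Intermediate answer:"],  # pause once a sub-question is ready
        )["choices"][0]["text"]
        prompt += out
        if "So the final answer is:" in out:  # model committed to an answer
            return out.split("So the final answer is:")[-1].strip()
        # Otherwise the model just asked "Follow up: <sub-question>".
        subquestion = out.split("Follow up:")[-1].strip()
        prompt += "Intermediate answer: " + search(subquestion) + "\n"
    return prompt  # ran out of hops; return the raw trace
```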
ofirpress OP t1_ir9alhm wrote
Reply to comment by jbx028 in [R] Combining GPT-3 with Google Search enables answering complex questions by ofirpress
Yup, it answered correctly by 'talking things through'. Sometimes this happens on its own; a prompt like self-ask makes it happen with much, much higher probability.
If you run an empirical evaluation on hundreds of questions, you'll see that chain-of-thought and self-ask get much higher performance than using no prompt or a prompt that asks for the answer immediately.
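(If you want to run that comparison yourself, the evaluation loop is trivial. A rough sketch, where `ask` stands for any question-answering function, e.g. one of the prompt styles above, and substring match against the gold answer is a crude but standard metric:)

```python
from typing import Callable

def accuracy(ask: Callable[[str], str], dataset: list[tuple[str, str]]) -> float:
    """Fraction of (question, gold answer) pairs where the model's
    answer contains the gold answer, under a given prompt style."""
    hits = sum(gold.lower() in ask(question).lower()
               for question, gold in dataset)
    return hits / len(dataset)

# e.g. compare on the same dev set:
#   accuracy(direct_prompt, dev_set)      # ask for the answer immediately
#   accuracy(chain_of_thought, dev_set)
#   accuracy(self_ask, dev_set)
```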