[R] LAMBADA: Backward Chaining for Automated Reasoning in Natural Language - Google Research 2022 - Significantly outperforms Chain of Thought and Select Inference in terms of prediction accuracy and proof accuracy. Submitted by Singularian2501 t3_zyeeks on December 29, 2022 at 7:45 PM in MachineLearning 11 comments 58
blueSGL t1_j27mw4b wrote on December 30, 2022 at 5:44 AM, replying to Dankmemexplorer: https://www.reddit.com/r/GPT3/comments/uxdywn/large_language_models_are_zeroshot_reasoners/ Looks like it was 7. 7
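The backward chaining named in the paper's title is a classic goal-directed inference strategy: start from the conclusion to be proved and recursively search for rules whose consequent matches it, until every open premise is grounded in a known fact. The sketch below illustrates that generic technique over a toy propositional rule base; it is NOT the LAMBADA system itself, which drives each step (fact checking, rule selection, goal decomposition) with a language model. The rule base, facts, and `prove` function here are invented for illustration.

```python
# Toy backward-chaining sketch (illustrative only, not LAMBADA itself).
# RULES maps a goal to a list of premise sets, any one of which entails it.
RULES = {
    "mortal(socrates)": [["human(socrates)"]],
    "human(socrates)": [["greek(socrates)"]],
}
FACTS = {"greek(socrates)"}


def prove(goal, depth=5):
    """Return True if `goal` follows from FACTS via RULES, searching backward
    from the goal with a depth bound to guarantee termination."""
    if goal in FACTS:
        return True
    if depth == 0:
        return False
    for premises in RULES.get(goal, []):
        # The goal holds if every premise in some rule can itself be proved.
        if all(prove(p, depth - 1) for p in premises):
            return True
    return False


print(prove("mortal(socrates)"))  # chains mortal <- human <- greek (a known fact)
```

The contrast with forward (chain-of-thought-style) reasoning is that the search is anchored on the goal, so irrelevant facts and rules are never expanded, which is the efficiency argument the paper's title alludes to.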