Even_Tangerine_800 t1_is9zy95 wrote
Reply to comment by Co0k1eGal3xy in [R] Mind's Eye: Grounded Language Model Reasoning through Simulation - Google Research 2022 by Singularian2501
This is what I got: GPT-3 Answer.
Apparently, the model arrives at the wrong answer without mentioning air resistance. I have tried many times, and the results are consistent.
Considering that the rules of free fall are covered in textbooks (which should have been included in the pre-training datasets), these results are even more striking to me.
Even_Tangerine_800 t1_isa18wf wrote
Reply to comment by Co0k1eGal3xy in [R] Mind's Eye: Grounded Language Model Reasoning through Simulation - Google Research 2022 by Singularian2501
Are the questions as simple as a = F/m = mg / m = g?
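For concreteness, here is a minimal sketch (my own toy example, not from the paper or the GPT-3 prompt) of that calculation with air resistance ignored: the mass cancels, so objects of different mass land at the same time.

```python
# Toy illustration (hypothetical example, not from the paper):
# with gravity as the only force, a = F/m = (m*g)/m = g.
G = 9.8  # gravitational acceleration, m/s^2

def fall_time(height_m: float, mass_kg: float) -> float:
    """Time to fall height_m in a vacuum; the mass has no effect."""
    a = (mass_kg * G) / mass_kg  # simplifies to G for any mass
    return (2 * height_m / a) ** 0.5

print(fall_time(10.0, 1.0))    # ~1.43 s for a 1 kg object
print(fall_time(10.0, 100.0))  # ~1.43 s for a 100 kg object as well
```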
Anyway, if humans put effort into optimizing a tool for accurate simulation, we can treat this more as an alignment problem than as a matter of pure scientific judgment.
You can update the knowledge in the physics engine if you want.
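To make that concrete, here is a minimal sketch using a hypothetical toy simulator (illustration only, not the actual engine used in the paper): the physical "knowledge" lives in explicit parameters, so correcting or extending it is a parameter update rather than a retraining run.

```python
# Hypothetical toy simulator (illustration only, not the paper's engine).
from dataclasses import dataclass

@dataclass
class FreeFallSim:
    gravity: float = 9.8           # m/s^2
    drag_coefficient: float = 0.0  # 0.0 => air resistance ignored

    def fall_time(self, height_m: float, mass_kg: float, dt: float = 1e-4) -> float:
        """Integrate the fall with simple Euler steps until height_m is covered."""
        v = dropped = t = 0.0
        while dropped < height_m:
            a = self.gravity - (self.drag_coefficient / mass_kg) * v * v
            v += a * dt
            dropped += v * dt
            t += dt
        return t

sim = FreeFallSim()              # default "knowledge": vacuum
print(sim.fall_time(10.0, 1.0))  # ~1.43 s
sim.drag_coefficient = 0.5       # update the engine's knowledge: add air drag
print(sim.fall_time(10.0, 1.0))  # noticeably longer for a light object
```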