Lawjarp2 t1_j9liaa5 wrote
Reply to comment by VeganPizzaPie in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
No. Once an LLM picks up a keyword, a lot of related material rises in its probability distribution. You can also reason backwards from the answer choices to the question. Both make it easier for an LLM to answer if it's trained for this exact scenario.
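Roughly the idea, as a toy sketch (this is *not* the paper's model, and the association counts are made up): score each multiple-choice option by how strongly its words associate with the question's keywords, a crude stand-in for an LLM's conditional probabilities.

```python
# Toy illustration: keyword -> related-word associations stand in for
# what an LLM learns; multiple-choice answering becomes "pick the
# option whose words are most probable given the keywords".
import math
from collections import Counter

# Hypothetical "training" co-occurrence counts (invented for the example).
ASSOCIATIONS = {
    "photosynthesis": Counter({"chlorophyll": 9, "sunlight": 8,
                               "glucose": 6, "mitochondria": 1}),
    "respiration": Counter({"mitochondria": 9, "oxygen": 7, "glucose": 5}),
}

def option_log_score(keywords, option):
    """Sum smoothed log-probabilities of the option's words given each keyword."""
    score = 0.0
    for kw in keywords:
        counts = ASSOCIATIONS.get(kw, Counter())
        total = sum(counts.values())
        for word in option.lower().split():
            # Add-one smoothing so unseen words get a small nonzero probability.
            score += math.log((counts[word] + 1) / (total + len(counts) + 1))
    return score

def answer(keywords, options):
    # The option most associated with the keywords wins.
    return max(options, key=lambda o: option_log_score(keywords, o))

print(answer(["photosynthesis"], ["mitochondria", "chlorophyll"]))  # -> chlorophyll
```

The same mechanism works "backwards": given the options, the one whose words the model has seen near the question's keywords gets the highest score, even without real reasoning.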