Submitted by enryu42 t3_122ppu0 in MachineLearning
ThePhantomPhoton t1_jdsyzhn wrote
It’s easier to gauge the effectiveness of these large language models within the context of what they are actually doing, and that is repeating language they’ve learned elsewhere, predicated on some prompt provided by the user. They are not “reasoning,” although the language they use can lead us to believe that is the case. If you’re disappointed by their coding, you will certainly be disappointed by their mathematics.