Submitted by RadioFreeAmerika t3_122ilav in singularity
robobub t1_jdst84e wrote
Reply to comment by 0382815 in Why is maths so hard for LLMs? by RadioFreeAmerika
Why? Each token costs O(1) to generate, and the model predicts them incrementally, conditioning on the ones it has just produced. So the full answer takes O(m), where m is the number of tokens.
If it is possible for GPT to do 1+1, it can do a large number of them incrementally. It's not smart enough to do this reliably on its own (you'll have more success if you encourage GPT to use chain-of-thought reasoning), but it's possible.
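The argument can be sketched in a few lines of Python. This is only an analogy for the comment's point, not how GPT actually computes: each emitted "step" (standing in for a token) does constant work and conditions on the previous partial result, so m steps cost O(m) total.

```python
# Mimic chain-of-thought style incremental addition: the model emits
# one small step (a partial sum) at a time. Each step is O(1) work,
# so emitting m steps costs O(m) in total.

def incremental_sum(numbers):
    steps = []          # the "tokens" that would be emitted, in order
    total = 0
    for n in numbers:   # one constant-time step per number
        total += n      # each step builds on the step just generated
        steps.append(total)
    return steps

print(incremental_sum([1, 1, 1, 1]))  # -> [1, 2, 3, 4]
```

Doing the sum all at once would be a single "hard" prediction; breaking it into steps trades that for many easy ones, which is the intuition behind chain-of-thought prompting.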