Submitted by Singularian2501 t3_zm22ff in MachineLearning
waffles2go2 t1_j09j6br wrote
Oof, so LLMs use autoregression to predict what comes next.
If you bolt LLMs onto a system that can perform multi-step problem solving (the MacGuffin of this paper) then you have a system that can "reason"...
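To make the "predict what's next" point concrete, here is a minimal toy sketch of the autoregressive idea: a bigram model built from raw counts that generates a continuation one token at a time. The corpus, word-level tokens, and greedy decoding are illustrative assumptions only; real LLMs learn the conditional distribution with a neural network over subword tokens.

```python
# Toy autoregressive "language model": count bigram frequencies in a
# tiny corpus, then greedily emit the most frequent follower token.
# Corpus and greedy decoding are illustrative assumptions.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat on the rug".split()

# counts[prev] maps each follower token to how often it follows prev.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Greedy prediction: return the most frequent follower of token."""
    return counts[token].most_common(1)[0][0]

def generate(start, steps):
    """Generate a continuation one token at a time (autoregression)."""
    out = [start]
    for _ in range(steps):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the", 4))  # each token depends only on the previous one
```

The key property the thread is arguing about is visible even here: each step conditions only on the sequence so far, so any "multi-step problem solving" has to come from an outer loop driving the generator, not from the next-token predictor itself.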
Oof...
fooazma t1_j0acepl wrote
Why a MacGuffin? The lack of multi-step problem solving is clearly limiting. Examples of what's wrong with ChatGPT are almost always examples of the lack of few-step problem solving based on factual knowledge.
In an evolutionary competition between LLMs with this capability and those without, the former will wipe the floor with the latter. Shanahan, like all GOFAI people, understands this very well.
waffles2go2 t1_j0hslfb wrote
Agreed, it just lacks any nuance: "if you assume x, then here is how you could use y"...
Also, "confidently incorrect" describes pretty much every prediction in this rapidly evolving space, and if you're looking at business applications it's a cost/precision tradeoff where the most advanced solutions often lose.