fooazma t1_j0acepl wrote
Reply to comment by waffles2go2 in [R] Talking About Large Language Models - Murray Shanahan 2022 by Singularian2501
Why a McGuffin? The lack of multi-step problem solving is clearly limiting, and the examples of what's wrong with ChatGPT are almost always failures of few-step problem solving based on factual knowledge.
In an evolutionary competition between LLMs with this capability and those without, the former will wipe the floor with the latter. Shanahan, like all GOFAI people, understands this very well.
waffles2go2 t1_j0hslfb wrote
Agreed, it just lacks any nuance: no "if you assume x", "then here is how you could use y"...
Also, "confidently incorrect" describes pretty much every prediction in this rapidly evolving space. And if you're looking for business applications, it's a cost/precision tradeoff where the most advanced solutions often lose.