Submitted by hey__bert t3_125x5oz in singularity
I keep seeing people argue that because AI systems are simply complex functions trained on large amounts of data, they are just predicting the next word they should say and don't really "understand" anything. While this is technically true of how models are currently built, the argument makes a very obtuse assumption about what it means to understand something.
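To make "predicting the next word" concrete, here is a toy sketch (plain Python, nothing like a real LLM): a bigram model that just counts which word tends to follow which, then turns the counts into a probability distribution. Real models swap the count table for a neural network over high-dimensional vectors, but the interface is the same - context in, distribution over next words out.

```python
from collections import defaultdict, Counter

# Toy "training data" -- a real model sees trillions of tokens, not three sentences.
corpus = "the ball is round . the ball can bounce . the ball is used in sports .".split()

# Count how often each word follows each other word (a bigram table).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return a probability distribution over the word that follows `word`."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    return {w: round(c / total, 2) for w, c in counts.items()}

print(predict_next("ball"))  # {'is': 0.67, 'can': 0.33}
```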
Humans are also trained on huge amounts of input data as they grow up and learn how to behave and think about the world. When we say we understand something, we mean we have many layers of knowledge/information about what that thing is. We can hold a very deep understanding of a subject as a large model of information in our brain, but, in the end, that model is just made up of layers of data/facts that reference each other. All it is is layers of data - nothing more. You can drill down into the layers of any subject by asking yourself questions about what you know about it and why. Even with a human brain, it doesn't take long to hit a wall in how much you really know, and everyone has a different depth of understanding of any given subject.
For example, you can ask yourself, "what is a ball?" and answer -> a ball is a sphere -> some balls can bounce -> they can be used in sports...etc. When you do this, you are just traversing everything you can remember about balls. Current AI models do something very similar - they just lack the "depth" of knowledge the human brain has, because of limits on the processing power and memory available to encode that much information in multidimensional vectors. When today's comparatively shallow machine learning models have the processing power to encode a deeper understanding of any subject, asking whether the computer "understands" will be a meaningless question. Add to this the fact that people are often very wrong about what they think they understand, and I see no reason a computer couldn't "understand" anything better than a human.
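For anyone unsure what "encoding information in multidimensional vectors" looks like, here's a minimal sketch with made-up 4-dimensional vectors (real embeddings are learned from data and have hundreds or thousands of dimensions). The point is just that "what do I know about balls?" becomes "what sits nearby in the vector space?":

```python
import math

# Hand-made 4-dimensional "embeddings" -- purely illustrative numbers, not learned.
embeddings = {
    "ball":      [0.9, 0.8, 0.1, 0.0],
    "sphere":    [0.8, 0.9, 0.0, 0.1],
    "sports":    [0.7, 0.2, 0.1, 0.9],
    "democracy": [0.0, 0.1, 0.9, 0.2],
}

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    """Similarity between two vectors: close to 1.0 means closely related."""
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# Traversing what the model "knows" about balls = finding nearby vectors.
for word in ("sphere", "sports", "democracy"):
    print(word, round(cosine(embeddings["ball"], embeddings[word]), 2))
# sphere 0.99, sports 0.57, democracy 0.15
```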
Cryptizard t1_je6qdax wrote
If it has understanding, it is a strange, statistics-based understanding that doesn't align with what many people think of as rational intelligence. For instance, an LLM can learn that 2+2=4 by seeing it enough times in its input. But you can also convince it that 2+2=5 by telling it that enough times. It cannot take a prior rule and use it to discard future data; eventually, new data will overwrite the old understanding.
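To make that concrete with a toy example (plain Python, obviously nothing like a real LLM, but frequency-driven in the same spirit): imagine a "model" that just counts which answer follows "2+2=" in its data. There is no arithmetic rule it can fall back on, so enough repetitions of the wrong answer simply outvote the right one.

```python
from collections import Counter

# A "model" that only knows which completion of "2+2=" it has seen most often.
answers = Counter()

# Original data: the correct answer appears many times.
answers.update(["4"] * 100)
print(answers.most_common(1))  # [('4', 100)] -> it "knows" 2+2=4

# Repeat the wrong answer often enough and the statistics flip...
answers.update(["5"] * 150)
print(answers.most_common(1))  # [('5', 150)] -> now it "knows" 2+2=5
```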
It doesn't have the ability to take a simple logical postulate and apply it consistently to discover new things, because nothing is absolutely true to an LLM. It is purely statistical, which always leaves some chance of it conflicting with itself ("hallucinating," as they call it).
This is probably why we need a more sophisticated multi-part AI system to really achieve AGI. LLMs are great at what they do, but what they do is not everything. Language is flexible and imprecise, so statistical modeling works great for it. Other domains are not, and that is where LLMs tend to fail.