pengo t1_jechdk0 wrote
Reply to comment by Barton5877 in [R] The Debate Over Understanding in AI’s Large Language Models by currentscurrents
> The long and short of it being that "understanding" is never going to be the right term for us to use.
Yet I'm still going to say, "Wow, ChatGPT really understands the nuances of regex XML parsing," and also say, "ChatGPT has no understanding of anything at all," and leave it to the listener to interpret each sentence correctly.
> I don't know to what degree LLMs have "latent" conceptual connectedness, or whether this is presented only in the response to prompts.
concept, n.
- An abstract and general idea; an abstraction.
- Understanding retained in the mind, from experience, reasoning, and imagination.
It's easy to avoid the word "understanding" on the grounds that it's imprecise, but any other word you pick has exactly the same problem.
pengo t1_je99h3k wrote
There are two meanings of understanding:
- My conscious sense of understanding, which I can experience but have no way to measure in anyone else, unless someone solves the hard problem of consciousness.
- Demonstrations of competence, which we say "show understanding" and which can be measured, such as exam results. Test results might be a proxy for measuring conscious understanding in humans, but they do not test it directly, and they have no connection to it whatsoever in machines.
That's it. They're two different things. Two meanings of understanding. The subjective experience and the measurement of understanding.
Machines almost certainly have no consciousness, but they can demonstrate understanding. There's no contradiction in that, because showing understanding does not imply having (conscious) understanding. A tree can fall without anyone experiencing the sensation of hearing it; it still fell. And if you hear a recording of a tree falling, no physical tree fell. They're simply separate things: a physical event and a mental state. Just like demonstrations of understanding and conscious understanding.
Why pretend these are the same thing and quiz people about it? Maybe the authors can write their next paper on the "debate" over whether "season" means a time of year or something you do with paprika.
Really sick of this fake "debate" popping up over and over.
pengo t1_je7vr2t wrote
Reply to comment by Puzzleheaded_Acadia1 in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Yes, it can think critically; it just doesn't tell you whether it is or isn't doing so at any given moment.
pengo t1_jdtcoly wrote
Reply to comment by artsybashev in [D] GPT4 and coding problems by enryu42
Absolutely nonsensical take.
pengo t1_jdt6iv2 wrote
Reply to comment by cegras in [D] GPT4 and coding problems by enryu42
> Then what you have is something that can separate content into logically similar, but orthogonal realizations.
Like a word vector? The thing every language model is based on?
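For anyone unfamiliar: a word vector is just an embedding, a list of numbers whose geometry reflects similarity between words. A minimal sketch of the idea (toy, made-up 3-dimensional vectors, not from any real model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; values near 1 mean 'semantically close'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for illustration; a real model learns hundreds of dimensions.
vectors = {
    "king":  np.array([0.90, 0.10, 0.80]),
    "queen": np.array([0.85, 0.15, 0.82]),
    "apple": np.array([0.10, 0.90, 0.05]),
}

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: related concepts
print(cosine_similarity(vectors["king"], vectors["apple"]))  # lower: unrelated concepts
```

The point being that "separating content into logically similar but distinct representations" is exactly what embeddings already do, and every language model is built on them.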
pengo t1_jegz2so wrote
Reply to The pause-AI petition signers are just scared of change by Current_Side_4024
this is probably the most childish post i've ever seen on the internet