pengo t1_jechdk0 wrote

> The long and short of it being that "understanding" is never going to be the right term for us to use.

Yet still I'm going to say "Wow, ChatGPT really understands the nuances of parsing XML with regex" and also say "ChatGPT has no understanding at all of anything," and leave it to the listener to interpret each sentence correctly.

> I don't know to what degree LLMs have "latent" conceptual connectedness, or whether this is presented only in the response to prompts.

concept, n.

  1. An abstract and general idea; an abstraction.

  2. Understanding retained in the mind, from experience, reasoning and imagination.

It's easy to avoid "understanding" for being imprecise, but it's impossible to pick other words that don't have the exact same problem.

1

pengo t1_je99h3k wrote

There are two meanings of understanding:

  1. My conscious sense of understanding which I can experience and I have no ability to measure in anyone else, unless someone solves the hard problem.
  2. Demonstrations of competence, which we say "show understanding" and which can be measured, such as exam results. Test results might be a proxy for measuring conscious understanding in humans, but they do not test it directly, and in machines they have no connection to it whatsoever.

That's it. They're two different things. Two meanings of understanding. The subjective experience and the measurement of understanding.

Machines almost certainly have no consciousness, but they can demonstrate understanding. There's no contradiction in that, because showing understanding does not imply having (conscious) understanding. If a tree falls and no one experiences the sensation of hearing it, that doesn't mean it didn't fall. And if you hear a recording of a tree falling, no physical tree fell. They're simply separate things: a physical event and a mental state. Just like demonstrations of understanding and conscious understanding.

Why pretend these are the same thing and quiz people about it? Maybe the authors can write their next paper on the "debate" over whether "season" means a time of year or something you do with paprika.

Really sick of this fake "debate" popping up over and over.

6

pengo t1_jdt6iv2 wrote

Reply to comment by cegras in [D] GPT4 and coding problems by enryu42

> Then what you have is something that can separate content into logically similar, but orthogonal realizations.

Like a word vector? The thing every language model is based on?
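To make the point concrete, here's a minimal sketch of the idea behind word vectors: words live as points in a vector space, and "logically similar" words end up pointing in similar directions, measurable by cosine similarity. The vectors below are made-up toy values, not real learned embeddings (real models use hundreds of dimensions trained from co-occurrence statistics).

```python
# Toy illustration of word-vector similarity (hypothetical hand-picked
# vectors; NOT real embeddings from any trained model).
from math import sqrt

vectors = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.1, 0.8, 0.2],
    "apple": [0.1, 0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

# Related words score higher than unrelated ones:
print(cosine(vectors["king"], vectors["queen"]))  # higher
print(cosine(vectors["king"], vectors["apple"]))  # lower
```

The "orthogonal realizations" phrasing in the quote maps onto exactly this geometry: unrelated concepts sit near-orthogonal in the space, related ones cluster.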

1