pengo t1_jechdk0 wrote

> The long and short of it being that "understanding" is never going to be the right term for us to use.

Yet I'm still going to say "Wow, ChatGPT really understands the nuances of regex XML parsing" and also say "ChatGPT has no understanding at all of anything," and leave it to the listener to interpret each sentence correctly.

> I don't know to what degree LLMs have "latent" conceptual connectedness, or whether this is presented only in the response to prompts.

concept, n.

  1. An abstract and general idea; an abstraction.

  2. Understanding retained in the mind, from experience, reasoning and imagination.

It's easy to avoid "understanding" on the grounds that it's imprecise, but any other word you pick will have exactly the same problem.

1

Barton5877 t1_jee1fw4 wrote

That the definition of concept you're citing uses the term "understanding" is incidental - it's clearly a definition of concept in the context of human reasoning.

Whatever terminology we ultimately use for the connectedness of neural networks pre-trained on language is fine by me. It should be as precise about the technology as possible while still conveying the relevant effects of "intelligence."

We're at the point now where GPT-4 seems to produce connections that come from a place that's difficult to locate or reverse-engineer - or that perhaps simply come from surprising token selections.

That's what I take away from much of the current discussion - I have no personal insight into the model's design, or into the many parts that are stitched together to make it work as it does (quoting Altman here, talking to Lex).

1