Submitted by New_Computer3619 t3_11hxwsm in MachineLearning

The title of this post is from a Tom Scott video I watched a while back. I tried the challenges with ChatGPT. It seems like it handles both cases very well.

I wonder how ChatGPT can infer from context in cases like these?


https://preview.redd.it/wnmswh0gspla1.png?width=1914&format=png&auto=webp&v=enabled&s=0d918e030a3640fe6f737310f71f76dbb62b0886


https://preview.redd.it/0lms7svgspla1.png?width=1900&format=png&auto=webp&v=enabled&s=35d289ba480a8e2943aa347f516a224e5164d43f

Edit: I tried the same questions but in separate chats, and ChatGPT messed up. It seems like ChatGPT can only analyze sentences grammatically, without any "intuition" like ours. Is that correct?


https://preview.redd.it/a6jx9r6btpla1.png?width=1662&format=png&auto=webp&v=enabled&s=8a4bb8c5ce0caa933c55993275c0baae1666f6d5


https://preview.redd.it/qqpabj5ctpla1.png?width=1524&format=png&auto=webp&v=enabled&s=ec5d69975bbd455bc2fbad03c2638636a738ca59

9

Comments


currentscurrents t1_javx4pw wrote

The Winograd Schema is a test of commonsense reasoning. It's hard because it requires not just knowledge of English, but also knowledge of the real world.

But as you found, it's pretty much solved now. As of 2019, LLMs could answer it with better than 90% accuracy, which means it was actually already solved by the time Tom Scott made his video.
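For anyone curious how those benchmark numbers are usually produced: a common trick is to substitute each candidate antecedent into the sentence and let a language model score both versions, picking the higher-likelihood one. Here's a minimal sketch of that idea, assuming the Hugging Face `transformers` package and GPT-2 (chosen purely for illustration, not what the 2019 results actually used):

```python
# Resolve a Winograd-style pronoun by comparing sentence likelihoods under an LM.
# Assumes: pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(text: str) -> float:
    """Total log-probability the model assigns to the token sequence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token; undo the averaging.
    return -out.loss.item() * (ids.shape[1] - 1)

template = "The trophy doesn't fit in the brown suitcase because {} is too big."
candidates = ["the trophy", "the suitcase"]
scores = {c: sentence_log_likelihood(template.format(c)) for c in candidates}
print(max(scores, key=scores.get))  # the model's preferred antecedent
```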

15

rpnewc t1_jawxrjh wrote

Yes, ChatGPT does not have any idea what a trophy is, what a suitcase is, or what brown is. But it has seen a lot of sentences containing these words, and hence some of their attributes. So when you ask these questions, sometimes (because of random sampling) it picks the correct noun as the answer; other times it picks the wrong one. Ask it a logic puzzle with ten people as characters and see how its reasoning capability holds up.
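That sampling point is easy to see with an open model: at a nonzero temperature, the same pronoun-resolution prompt can come back with different antecedents across runs. A rough sketch, again assuming the Hugging Face `transformers` library and GPT-2 as a stand-in (ChatGPT itself can't be inspected this way):

```python
# Sample the same pronoun-resolution prompt several times at temperature 1.0
# to see how much the chosen antecedent varies from run to run.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = ("The trophy doesn't fit in the brown suitcase because it is too big. "
          "The word 'it' refers to the")
inputs = tokenizer(prompt, return_tensors="pt")

torch.manual_seed(0)
for _ in range(5):
    out = model.generate(
        **inputs,
        do_sample=True,          # sample instead of greedy decoding
        temperature=1.0,
        max_new_tokens=3,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Print only the newly generated tokens.
    print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:]).strip())
```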

7

2blazen t1_jazyryq wrote

Do you think an LLM can be taught to recognize when a question would require advanced reasoning to answer, or is it inherently impossible?

1

rpnewc t1_jb17dvp wrote

For sure it can be taught. But I don't think the way to teach it is to give it a bunch of sentences from the internet and expect it to figure out advanced reasoning. It has to be explicitly tuned toward that objective. A more interesting question is: how can we do this for all domains of knowledge in a general manner? Well, that is the question. In other words, what is the master algorithm for learning? There is one (or a collection of them) for sure, but I don't think we are very close to it. ChatGPT is simply pretending to be that system, but it's not.

1

BrotherAmazing t1_jaz5fnx wrote

This is what I came here to say.

If one just reads about how ChatGPT was trained and understands some basics of machine learning, it’s quite obvious what you say has to be true.

−1

DSM-6 t1_javnmz2 wrote

Personally, I think the answer is existing bias in the training data.

I don’t know enough about ChatGPT to state this as fact, but I think it’s safe to assume that ChatGPT adheres to grammar rules without them being hard-coded, i.e. nowhere in the code does it state “antecedent pronouns should refer to the subject of a sentence.”

Instead, I assume ChatGPT’s grammar comes from repeated convention in the training data. Enough data in which the antecedent refers to something other than the sentence object means that “they” can refer to any of the preceding nouns. In that case, “councilmen fear violence” is a far more common pattern in the training data than “protesters fear violence.”
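One cheap way to probe that frequency-bias hypothesis would be to count how often each candidate antecedent appears near the verb phrase in a large text corpus. A toy sketch below; "corpus.txt" is a hypothetical stand-in for any plain-text dump, and this is only a rough proxy for what the model actually learns:

```python
# Toy check of the frequency-bias hypothesis: which candidate antecedent
# co-occurs with "fear(ed) violence" more often in a large text corpus?
import re

candidates = ["councilmen", "protesters"]

# "corpus.txt" is a placeholder for any large plain-text corpus.
with open("corpus.txt", encoding="utf-8") as f:
    text = f.read().lower()

for noun in candidates:
    # Count the noun followed within a few words by "fear/feared violence".
    pattern = rf"\b{noun}\b(?:\s+\w+){{0,3}}\s+fear(?:ed)?\s+violence"
    hits = len(re.findall(pattern, text))
    print(f"{noun}: {hits} matches near 'fear violence'")
```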

Then again, your example was in the passive voice, so I dunno 🤷‍♀️.

5

New_Computer3619 OP t1_javqiuu wrote

I tried the same questions in separate chats, as in the edited post. ChatGPT gave incorrect/unsatisfying answers this time. Maybe without context from the previous Q&A, it can only infer using grammar rules? What do you think?

2