Submitted by New_Computer3619 t3_11hxwsm in MachineLearning
The title of this post is from a Tom Scott video I watched a while back. I tried the challenges with ChatGPT, and it seems to handle both cases very well.
I wonder how ChatGPT can infer from context like this?
Edit: I tried the same questions in separate chats and ChatGPT messed up. It seems like ChatGPT can only analyze sentences grammatically, without any "intuition" like ours. Is that correct?
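For anyone who wants to reproduce the "separate chats" test, here is a rough sketch using the OpenAI Python client. The prompts are my reconstruction of the classic councilmen/demonstrators Winograd pair (I asked through the web UI, not the API, so the exact wording and model name here are assumptions):

```python
# Rough sketch: send each Winograd-style prompt in its own conversation,
# so the model cannot lean on context from the other question.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompts = [
    "The city councilmen refused the demonstrators a permit because "
    "they feared violence. Who does 'they' refer to?",
    "The city councilmen refused the demonstrators a permit because "
    "they advocated violence. Who does 'they' refer to?",
]

for prompt in prompts:
    # A fresh messages list per request = a separate "chat" with no shared history.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print("->", response.choices[0].message.content, "\n")
```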
DSM-6 t1_javnmz2 wrote
Personally, I think the answer is existing bias in the training data.
I don't know enough about ChatGPT to state this as fact, but I think it's safe to assume it doesn't adhere to explicit grammar rules, i.e. nowhere in the code does it state "a pronoun's antecedent should be the subject of the sentence."
Instead, I assume ChatGPT's grammar comes from repeated convention in the training data. With enough data in which the antecedent refers to something other than the sentence's subject, "they" can end up referring to any of the preceding nouns. In that case, "councilmen fear violence" is a far more common sentence in the training data than "protesters fear violence."
Then again, your example was in the passive voice, so I dunno 🤷♀️.
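A rough way to sanity-check the frequency idea above is to score both sentences with an open language model and see which one it finds more probable. This is just a sketch using GPT-2 via Hugging Face transformers as a stand-in; it says nothing definitive about how ChatGPT itself resolves the pronoun:

```python
# Score which sentence a language model assigns higher probability to,
# as a proxy for how common such phrasings are in web text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood of `text` under GPT-2."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

for sentence in ["The councilmen fear violence.",
                 "The protesters fear violence."]:
    print(f"{sentence}  avg log-likelihood: {avg_log_likelihood(sentence):.3f}")
```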