
gurenkagurenda t1_jacebvc wrote

I don’t know how you want to define “understanding” when talking about a non-sentient LLM, but in my experiments, ChatGPT consistently gets reading comprehension questions from SAT practice tests right, and it’s well known that it has passed multiple professional exams. It’s nowhere close to infallible, but you’re also underselling what it does.

4

TeaKingMac t1_jad6gd4 wrote

>it’s well known that it has passed multiple professional exams.

Well, yeah. Professional exams have clearly defined correct answers.

When a student is writing an essay, the primary objective is creating and defending an argument. Abdicating that responsibility to ChatGPT circumvents the entire point of the assignment.

3

gurenkagurenda t1_jada8dv wrote

Sure, but that’s an entirely different argument.

1

TeaKingMac t1_jadgiv6 wrote

"quoting" ChatGPT as a source is also stupid, because it's neither a primary (best) source, or even a secondary source, like a newspaper article.

It's just a random assortment of (mostly correct) information. That's the same reason why academia doesn't currently allow Wikipedia as a source for information.

1

Amir_Kerberos t1_jaeh7pq wrote

That’s a misunderstanding of why academia frowns upon Wikipedia. The fact that it can have questionable accuracy is not the major concern; rather, it is that Wikipedia is not a primary source.

2

TeaKingMac t1_jaepy4k wrote

> it is not a primary source

AND NEITHER IS ChatGPT

No original information comes from ChatGPT. It is just a repository.

That's my point.

>it's neither a primary (best) source nor even a secondary source, like a newspaper article.

> It's just a random assortment of (mostly correct) information. That's the same reason academia doesn't currently allow Wikipedia as a source

0

MysteryInc152 t1_jacx3fi wrote

>I don’t know how you want to define “understanding”

People routinely invent their own vague and ill-defined definitions of understanding, reasoning, etc., just so LLMs won't qualify.

2

gurenkagurenda t1_jacxqtz wrote

Yes. A little while back, someone used a Computerphile video showing ChatGPT missing college-level physics questions as proof that ChatGPT is incapable of comprehension. The bar at this point has been set so high that apparently only a small minority of humans are capable of understanding.

1