JoieDe_Vivre_ t1_j8jt9l7 wrote
Reply to comment by venustrapsflies in ChatGPT Passed a Major Medical Exam, but Just Barely | Researchers say ChatGPT is the first AI to receive a passing score for the U.S. Medical Licensing Exam, but it's still bad at math. by chrisdh79
The point they’re making is in their second sentence.
If it’s correct, it doesn’t matter where it came from.
ChatGPT is just our first good stab at this kind of thing. As the models get better, they will outperform humans.
It’s hilarious to me that you spent all those words just talking shit, while entirely missing the point lol.
xxxnxxxxxxx t1_j8jzb3z wrote
If it’s ever correct, it’s by accident. The limitations listed above negate that point.
JoieDe_Vivre_ t1_j8k184o wrote
It’s literally designed to get the answer right. How is that ever “by accident”?
venustrapsflies t1_j8kck2g wrote
No, it's not designed to be logically correct at all; it's designed to appear correct by reproducing patterns from its training data.
On the one hand, it's pretty impressive that it can do what it does using nothing but a statistical model of language. On the other hand, it's a quite unimpressive example of artificial intelligence precisely because it is just a statistical language model. That's why it's abysmal at even simple math and logic questions, things that computers have historically been quite good at.
Human intelligence is nothing like a statistical language model. THAT is the real point, the one that both you and the OC, and frankly much of this sub at large, aren't getting.
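If you want to see concretely what "statistical model of language" means, here's a minimal sketch (assuming the Hugging Face `transformers` and `torch` packages and the public `gpt2` checkpoint, which are stand-ins, not anything ChatGPT-specific): the model assigns a probability to a token sequence, and nothing in it checks whether that sequence is true.

```python
# Sketch: a causal language model scores how *plausible* text is, not whether it is correct.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text: str) -> float:
    """Average per-token log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy,
        # i.e. the negative average log-likelihood of the sequence.
        loss = model(ids, labels=ids).loss
    return -loss.item()

# Both sentences get a score; the score measures plausibility, not truth.
print(avg_log_likelihood("Two plus two equals four."))
print(avg_log_likelihood("Two plus two equals five."))
```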
xxxnxxxxxxx t1_j8k2m48 wrote
No, you're misunderstanding how language models work. They are designed to guess the next word, and they can't do any more than that. This works because language is a subjective interface, far from logical correctness.
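To make "guess the next word" concrete, here's a minimal sketch of greedy generation under the same assumptions as the snippet above (`transformers`, `torch`, the `gpt2` checkpoint): generation is just repeatedly picking a likely next token and appending it.

```python
# Sketch: greedy next-token generation. Nothing in this loop verifies the output.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
for _ in range(5):
    with torch.no_grad():
        logits = model(ids).logits       # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()     # greedily take the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

When the output happens to be right, it's because the statistically likely continuation coincided with the correct one, not because anything checked it.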
MaesterPycell t1_j8ky26c wrote
https://en.m.wikipedia.org/wiki/Chinese_room
This thought experiment addresses the issue, probably at better length than I can here.
Additionally, I'd recommend that most people who are interested in AI read The Fourth Age, a philosophy book about AI. It gives a nice, accessible explanation of what it would take to be truly AGI, the steps we've made so far, and the ones we'll still need to make.
Quick edit: I also don't think you're wrong. This AI couldn't explain what it's saying; it has learned to take the machinery behind it and spit out something akin to human language. No matter how garbled or incoherent that output is, the machine behind it doesn't care, as long as it suits its learning.