
visarga t1_j2cycv2 wrote

Hallucinations are a result of its training objective - it was trained to guess the next word, so it doesn't know what "should" come next, only what is probable. There are many approaches to fixing this, and I expect it to be a hot area of research in 2023, because generative model outputs that are not validated are worthless.
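For intuition, here is a minimal Python sketch of next-token sampling (the vocabulary, logits, and prompt are made up for the example, not taken from GPT-3): the model only hands back a probability distribution over continuations, and sampling from it produces plausible-sounding text whether or not it is factually grounded.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word candidates after the prompt "The capital of France is"
candidates = ["Paris", "Lyon", "a", "the"]
logits = [6.0, 2.0, 1.0, 0.5]   # made-up model scores for illustration
probs = softmax(logits)

# Sampling picks a *probable* word, not a *verified* one - which is exactly
# how a fluent but unvalidated (hallucinated) continuation can appear.
next_word = random.choices(candidates, weights=probs, k=1)[0]
print(list(zip(candidates, [round(p, 3) for p in probs])), "->", next_word)
```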

> But actually I think a better comparison may be a very schizophrenic human

GPT-3 doesn't have a set personality but it can assume any persona. You could say that makes it schizophrenic, or just an eager actor.

> No matter how many calculations we give you, it seems impossible to learn arithmetic beyond the two or three digits that you can most likely memorize.

This is just wrong. First, consider humans: we are very bad at calculating in our heads and need paper for anything longer than two or three digits. Second, language models can do it too - if you ask them to apply an exact algorithm step by step, they will carry out the arithmetic correctly.
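As an illustration of what "an exact algorithm" means here, this is the kind of schoolbook procedure you can spell out in a prompt and ask the model to narrate digit by digit - a sketch of the procedure itself, not of how the model executes it internally:

```python
def add_with_carries(a: str, b: str) -> str:
    # Schoolbook addition: walk right-to-left over the digits,
    # tracking the carry explicitly at every step.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_with_carries("987654321", "123456789"))  # -> 1111111110
```

Each loop iteration corresponds to one "show your work" step in the prompt, which is roughly what keeps the model on track for long operands.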

The very point of this paper was that GPT-3 is good at abstraction, which lets it solve complex problems on first sight without any reliance on memorisation. Doing addition would be trivial after Raven's Progressive Matrices.
