Submitted by CosmicTardigrades t3_10d3t41 in MachineLearning
suflaj t1_j4kmugm wrote
Unless the task is present in the human language distribution it learned to mimic, or in your prompt itself, it will not be able to do it.
While counting is one task that shows it doesn't actually understand anything, there are many more, including ones it doesn't outright refuse to answer. Some examples are math in general (especially derivatives and integration), logic to some extent, and pretty much anything too large for its memory (my assumption is it can fit a hundred or two hundred sentences before it starts forgetting things). The counting case is sketched below.
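Part of why counting fails is that the model works on subword tokens, not characters. Here's a minimal sketch of that, assuming the tiktoken tokenizer library (my choice for illustration, not something established in this thread):

```python
# Minimal sketch: why letter-counting is hard for a token-based model.
# Assumes the `tiktoken` tokenizer library (pip install tiktoken).
import tiktoken

# Tokenizer used by the gpt-3.5-turbo family of models.
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
tokens = [enc.decode([t]) for t in token_ids]

# The model only ever sees these chunks, never individual letters,
# so "how many r's are in strawberry?" has no direct representation.
print(tokens)           # e.g. ['str', 'aw', 'berry']
print(word.count("r"))  # 3 -- trivial in Python, hard token-by-token
```

The memory estimate lines up with this too: at roughly 20 tokens per sentence, a ~4k-token context window works out to about 200 sentences before earlier text gets truncated away.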
For things not specified in your prompt, it is also heavily biased. For example, even though it claims it doesn't give opinions, it prefers Go as a programming language, AWD for cars, hydrogen and EVs for fuel technology (possibly because of its eco-terrorist stances), the color red... These biases might be preventing it from doing tasks it should otherwise be able to do.
For example, if you ask it to objectively tell you what the best car is, it might say the Toyota Mirai, even though that's actually a terrible car to own even in California, the best place to have one. You might think its reasoning is broken, but in reality, the biases screwed it over.