Comments


Kafke t1_j4jgp3j wrote

It's failing because you're asking the AI to think. It does not think.

3

CosmicTardigrades OP t1_j4jly7d wrote

This comment just doesn't make any sense. AI doesn't think. AI doesn't talk. So what? You can still talk with it about the weather and it responds with a string of words that seems perfectly meaningful to you. What I'm actually asking is why ChatGPT's computational ability is weak compared with a finite state automaton, a push-down automaton, let alone a Turing machine, and yet it can still achieve such performance on NLP tasks.
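
To make "an FSA could do this" concrete: a regular-language task like tracking the parity of 1s needs only a two-state automaton. A minimal illustrative sketch (mine, not from either paper):

```python
# Two-state DFA: does a space-separated bit string contain an even number of 1s?
def even_ones(bits: str) -> bool:
    state = "even"  # start state: zero 1s seen so far
    for b in bits.split():
        if b == "1":
            # flip parity on every 1
            state = "odd" if state == "even" else "even"
    return state == "even"

print(even_ones("1 0 1 1 0 1 0 0 0"))  # True: four 1s
```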

5

CosmicTardigrades OP t1_j4jlzsm wrote

I know "linear algebra, calculus, and probability" perfectly well. And yes, I'm treating ChatGPT like a black box: not the training algorithm, but the parameters it learned from the corpus. There are billions of parameters, and as far as I know most AI researchers treat them as a black box too. If you know anything about AI research, the interpretability of DL models is a long-standing hard problem. In short, they are hard to understand. We do have some intuition about DL models: a CNN's filters represent image features at different levels, and a transformer's Q-K-V matrices implement attention. What I'm asking is why such a design outperforms traditional NLP methods by so much.
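
For reference, a minimal NumPy sketch of the scaled dot-product attention those Q-K-V matrices compute (toy shapes, illustrative values only):

```python
import numpy as np

def attention(Q, K, V):
    """Standard transformer attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # row-wise softmax
    return w @ V                                     # weighted sum of values

# Toy example: 3 tokens, dimension 4
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(attention(Q, K, V).shape)  # (3, 4)
```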

BTW, I'm a bit infuriated when you say I "have to read some papers." My Zotero library contains a hundred AI papers I've read, and more importantly, I posted two papers I have read in this very post. They give a direct explanation of why ChatGPT fails at some regex and CFG tasks. My question is just one step beyond those two papers.

The tone in the images is just for fun; I originally posted this as a joke to my personal circle on social media. I do have at least CS-grad-level knowledge of how DL models work.

4

monkorn t1_j4jmu88 wrote

Don't just tell it to do it step by step; show it how to do it step by step and have it keep track as it goes. It will handle it then.

Prompt: I would like you to process the following strings like this example: "1 0 1 1 0 1 0 0 0". Step 1: Start with a count of 0. Step 2: Look at the first digit in the string and remove it; it's a 1, so add that value to the count. The new count is 1. The new string is "0 1 1 0 1 0 0 0". Continue until the string is empty. Can you try on "1 0 1 1 1 0 1 0 1 0 1"?

> Start with a count of 0

> Look at the first digit in the string "1 0 1 1 1 0 1 0 1 0 1" which is 1, so add that value to the count. The new count is 1. The new string is "0 1 1 1 0 1 0 1 0 1"

> Look at the first digit in the new string, which is 0, so add that value to the count. The new count is 1. The new string is "1 1 1 0 1 0 1 0 1"

...

> Look at the first digit in the new string, which is 0, so add that value to the count. The new count is 6. The new string is "1"

> Look at the first digit in the new string, which is 1, so add that value to the count. The new count is 7. The new string is ""

> The string is now empty, the final count is 7.
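
For comparison, the procedure the prompt spells out is just this loop (a minimal Python sketch):

```python
# Pop the first digit, add it to a running count, repeat until empty --
# exactly the steps the prompt walks the model through.
def count_ones_stepwise(s: str) -> int:
    digits = s.split()
    count = 0
    while digits:
        first = digits.pop(0)   # "look at the first digit and remove it"
        count += int(first)     # "add that value to the count"
    return count

print(count_ones_stepwise("1 0 1 1 1 0 1 0 1 0 1"))  # 7
```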

7

visarga t1_j4k5jfh wrote

Wrong tool for this kind of task: it should generate a Python function that gives you the answer when run on the input. That approach would also generalize better. The step-by-step Turing-machine approach is useful for concepts that don't map cleanly onto Python code.
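
Something like the following is the kind of function you'd hope it generates (a sketch, not actual ChatGPT output):

```python
def count_ones(bit_string: str) -> int:
    """Count the 1s in a space-separated binary string."""
    return sum(int(b) for b in bit_string.split())

print(count_ones("1 0 1 1 1 0 1 0 1 0 1"))  # 7
```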

2

squalidaesthetics20 t1_j4kafvv wrote

I tried this just now and the response was "There are 8 ones in the binary number 01010101110101010." But it was able to solve the following:

Numbers 1 to 10 in binary:

1 = 0001
2 = 0010
3 = 0011
4 = 0100
5 = 0101
6 = 0110
7 = 0111
8 = 1000
9 = 1001
10 = 1010
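
For reference, checking that answer in Python shows it's off by one:

```python
s = "01010101110101010"
print(s.count("1"))  # prints 9; the model said 8
```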

1

Space-cowboy-06 t1_j4kemiv wrote

I've seen the exact same thing when interviewing people who have a great CV and can talk at length about their experience, but then can't do a very simple task when you ask them to.

2

suflaj t1_j4kmugm wrote

Unless the task is present in the human-language distribution it learned to mimic, or in your prompt, it will not be able to do it.

While counting is one task that shows it doesn't actually understand anything, there are many more, among those it doesn't outright refuse to answer. Some examples are math in general (especially derivatives and integration), logic to some extent, or pretty much anything too big for its memory (my assumption is that it can hold a hundred or two hundred sentences before it forgets things).

For things not present in your prompt, it is also heavily biased. For example, even though it claims it doesn't give out opinions, it prefers Go as a programming language, AWD for cars, hydrogen and EVs for fuel technology (possibly because of its eco-terrorist stances), the color red, and so on. These biases might prevent it from doing some tasks it should otherwise be able to do.

For example, if you ask it to objectively name the best car, it might say the Toyota Mirai, even though that's a terrible car to own even in California, the best place to have one. You might think its reasoning is broken, but in reality, the biases screwed it over.

1

curiousshortguy t1_j4mt461 wrote

Think of ChatGPT as a multi-task meta-learner where the prompt you give it specifies the task. It's essentially trained only on text generation (with some fine-tuning to make it more conversational), so you need to set up a prompt that steers it toward reasonable answers. It can't think or calculate, but by showing it how to generate a correct answer in the prompt, it can leverage that information to give you better answers.
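
A minimal sketch of that idea: embed one fully worked example in the prompt, then pose the real question in the same format (the example wording here is made up for illustration):

```python
# Build a few-shot prompt: one worked example, then the real query.
worked_example = (
    'String: "1 0 1"\n'
    "Count: start at 0; first digit 1 -> count 1; "
    "first digit 0 -> count 1; first digit 1 -> count 2. Answer: 2."
)

query = 'String: "1 0 1 1 1 0 1 0 1 0 1"\nCount:'

prompt = worked_example + "\n\n" + query
print(prompt)  # send this to the model instead of the bare question
```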

1