CosmicTardigrades OP t1_j4jn6ga wrote
Reply to comment by monkorn in [D] ChatGPT can't count by CosmicTardigrades
Thank you. I'll try.
It seems we should teach it to build a Turing machine (or just a program) that solves the question, instead of letting it come up with an answer itself, which is likely to be wrong? Something like the sketch below.
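A minimal sketch of that idea, assuming a counting question: rather than asking the model "how many 'e's are in this word?" and trusting its token-by-token guess, ask it to emit a small program and run that. The function below is illustrative of the kind of exact procedure we'd want it to produce, not actual ChatGPT output.

```python
def count_char(text: str, target: str) -> int:
    """An exact counting procedure: the kind of code we want the model
    to generate instead of estimating the count directly."""
    return sum(1 for ch in text if ch == target)

# Usage: the answer is computed, not predicted.
print(count_char("bookkeeper", "e"))  # 3
```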
CosmicTardigrades OP t1_j4jlzsm wrote
Reply to comment by [deleted] in [D] ChatGPT can't count by CosmicTardigrades
I'm well aware of "linear algebra, calculus, and probability." And yes, I'm treating ChatGPT like a black box: not the training algorithm as a black box, but the parameters it learned from the corpus as a black box. There are billions of parameters, and as far as I know most AI researchers treat them as a black box too. If you follow AI research, you know that DL models' interpretability is a long-standing open problem. In short, they are hard to understand. Still, we do have some intuition about DL models: a CNN's filters represent image features at different levels of abstraction, and a transformer's Q-K-V matrices implement attention. What I'm asking is why such a design outperforms traditional NLP methods by so much.
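For concreteness, the Q-K-V computation I'm referring to is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. Here is a minimal NumPy sketch with toy shapes, purely for illustration:

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V                                   # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```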
BTW, I'm a bit infuriated when you say I "have to read some papers." My Zotero library contains a hundred AI papers I've read, and more importantly, I posted two papers I have read right in this post. They give a direct explanation of why ChatGPT fails at some regex and CFG tasks. My question is just one step beyond those two papers.
The tone in the images is just for fun, because I originally posted this as a joke to my personal circle on social media. I do have at least CS-grad-level knowledge of how DL models work.
CosmicTardigrades OP t1_j4jly7d wrote
Reply to comment by Kafke in [D] ChatGPT can't count by CosmicTardigrades
This comment just doesn't make any sense. "AI does not think. AI does not talk." So what? You still talk with it about the weather, and it responds with a word string that seems very meaningful to you. What I'm actually asking is why ChatGPT's ability is weak compared with a finite state automaton, a push-down automaton, let alone a Turing machine, yet it can still achieve such performance on NLP tasks.
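To make the comparison concrete, here is a toy example of the automaton-level task I mean: checking balanced parentheses is a classic CFG task handled by a push-down-automaton-style procedure (the stack degenerates to a counter), yet autoregressive LMs often get long instances wrong. Illustrative only:

```python
def balanced(s: str) -> bool:
    depth = 0                    # the PDA "stack" reduces to a single counter
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:        # a ")" with nothing open
                return False
    return depth == 0

print(balanced("(()())"))  # True
print(balanced("(()"))     # False
```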
CosmicTardigrades OP t1_j4k5ols wrote
Reply to comment by visarga in [D] ChatGPT can't count by CosmicTardigrades
Yeah, you're right. The essence is to construct the right model for "counting."