
zesterer t1_j95owhm wrote

There's nothing in your example that demonstrates actual reasoning. As I said, GPT-3's training corpus is enormous, larger than a human can reasonably comprehend, and its training process was incredibly good at identifying and extracting patterns within that data set and encoding them into the network.

Although the example you gave is 'novel' in the most basic sense, no one part of it is novel: Bing is no more reasoning about the problem here than a student who searches for lots of similar problems on Stack Overflow and glues the solutions together. Sure, the final product of the student's work is "novel", as is the problem statement, but that doesn't mean the student's path to the solution required intrinsic understanding when such a vast corpus is available to borrow from.

That's the problem here: the corpus. GPT-3 has generalised its training data extremely well, there's no doubt about that, so much so that it can even solve tasks that are 'novel' in the large, but it's still limited to the domains the corpus covers. Ask it about new science, try to explain a new kind of mathematics to it, or even just give it non-trivial examples in a new programming language, and it fails to generalise. I've been trying for a while to get it to understand my own programming language, but it constantly reverts to knowledge from its corpus, because what I'm asking of it does not appear in that corpus, either explicitly or implicitly as a product of inference.

> ... you actually believe only biological minds are capable of reasoning

Of course not, and this is a strawman. There's nothing inherent about biology that could not be replicated digitally with enough care and attention.

My argument is that GPT-3 specifically is not showing signs of anything that could be construed as higher-level intelligence. Its behaviours, as genuinely impressive as they are, can be explained by the size of the corpus it was trained on, and as human users we misinterpret what we're seeing as intelligence when it is in fact just a statistically adept copy-cat machine, one able to interpolate knowledge from its corpus to cover domains that are only implicitly present in said corpus, such as the 'novel' problem you gave as an example.

I hope that clarifies my position.

1

superluminary t1_j99gj8i wrote

There’s nothing in any example I could solve that demonstrates actual reasoning in my neural net either. LLMs are a black box; we don’t know exactly how they arrive at the next word. As time goes on, I’m starting to suspect that my own internal dialogue is just iteratively getting the next word.
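For what it's worth, "iteratively getting the next word" really is the whole generation loop. Here's a minimal illustrative sketch; the "model" is just a hypothetical bigram lookup table standing in for a real network's next-token prediction, nothing like an actual transformer:

```python
# Toy autoregressive generation: at each step, look up the most likely
# next token given the last token, append it, and repeat.
# BIGRAMS is a made-up stand-in for a learned next-token distribution.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(prompt: str, steps: int) -> str:
    tokens = prompt.split()
    for _ in range(steps):
        nxt = BIGRAMS.get(tokens[-1])
        if nxt is None:  # model has no continuation for this context
            break
        tokens.append(nxt)  # the output is fed back in as the new context
    return " ".join(tokens)

print(generate("the", 3))  # the cat sat on
```

A real LLM conditions on the whole context window and samples from a probability distribution rather than doing a single-token lookup, but the outer loop, predict one token, append it, predict again, is the same.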

3

MysteryInc152 t1_j96eaav wrote

Your argument and position are weird, and that meme is very cringe. You're not a genius for being idiotically reductive.

The problem here is the same as with everyone else who takes this idiotic stance. We have definitions of reasoning and understanding, and you twist them to suit your ill-defined and vague assertions.

You think it's not reasoning? Cool. Then rigorously define what you mean by reasoning and design tests that comprehensively evaluate it, in both machines and people. If you can't do that, then you really have no business declaring whether a language model can reason and understand or not.

2

nul9090 t1_j97krdy wrote

The hostility was uncalled for. What you're asking for is a lot of work for a Reddit post. But there are plenty of tests and anecdotes suggesting that its capacity to reason and understand is lacking in important ways.

I'm not a fan of Gary Marcus but he raises valid criticisms here in a very recent essay: https://garymarcus.substack.com/p/how-not-to-test-gpt-3

Certainly, there are even more impressive models to come. I firmly believe that, some day, a machine will surpass human intelligence.

2

MysteryInc152 t1_j97mqgt wrote

>The hostility was uncalled for.

It was, I admit, but I've seen this argument many times and I don't care for it. And if you're going to claim superior intelligence for your line of reasoning, I don't care for that either.

>What you're asking for is a lot of work for a Reddit post.

I honestly don't care how much work it is; that's the minimum. If you're going to upend the traditional definitions of understanding and reasoning for your argument, then the burden of proof is on you to show why you should be taken seriously.

Tests are one thing; practicality is another. Bing, for instance, has autonomous control over the searches it makes as well as the suggestions it gives. For all intents and purposes, it browses the internet on your behalf. Frankly, it should be plainly obvious that a system unable to exhibit theory of mind would fall apart quickly on tasks that involve interacting with other systems.

So it passes tests and interacts with other systems and with the world as if it had theory of mind. If, after that, somebody says to me, "Oh, it's not 'true' theory of mind," then to them I say good day, but I'm not going to argue philosophy with you.

We've reached the point where, in a lot of areas, any perceived difference is wholly irrelevant in a practical or scientific sense. At that point I have zero interest in arguing over concepts that people have struggled to properly define or decipher since our inception.

3

diabeetis t1_j98290f wrote

Eh I think the hostility is appropriate

0

nul9090 t1_j983f73 wrote

Okay. I suppose, it all depends on what kind of conversation we want to have.

2