Submitted by Destiny_Knight t3_115vc9t in singularity
diabeetis t1_j944tsg wrote
Listen, anyone who describes it as a text or next-token predictor is just an idiot with no idea how LLMs work. It has clearly abstracted out patterns of relationships (i.e. meaning) from its corpus and uses something like proto-general reasoning to answer questions as part of the prediction function. In fact, ask it whether it's a text predictor and see what it says.
GoldenRain t1_j94xgw9 wrote
There is obviously some kind of reasoning behind it, as it can sometimes even explain unique jokes.
However, despite almost endless data, it cannot follow the rules of a text-based game such as chess. As such, it still seems to lack the ability to connect words to space, which is vital to numerous tasks, even text-based ones.
diabeetis t1_j94ym4n wrote
In chess, GPT-3 will make illegal moves, but GPT-4 will make legal but poor moves. That said, I do think a new architectural advance is needed.
FreshSchmoooooock t1_j97ktyq wrote
It's not an artificial general intelligence. It's an artificial generative intelligence. It's not good for chess and that kind of stuff.
superluminary t1_j99j4yz wrote
It follows the rules of chess badly. This is quite similar to the way a child follows those rules after the rules have first been explained.
zesterer t1_j95bc0m wrote
With respect, the fact that it's found more abstract ways to identify patterns between tokens beyond "these appeared close to one another in the corpus" doesn't imply that it's actually reasoning about what it's saying, nor that it has an understanding of semantics. It's worth remembering that it's had a truly enormous corpus to train on, many orders of magnitude greater than that which human beings are exposed to: it's observed almost every possible form of text, almost every form of prose, and it's observed countless relationships between text segments that have allowed it to form a pretty impressive understanding of how words relate to one another.
Crucially, however, this does not mean that it is meaningfully closer to truly understanding the world than past LLMs or even chat bots more widely. It's really important to take that part of your brain that's really good at recognising when you're talking to a person and put it in a box when talking to these systems: it's not a useful way to intuit what the system is actually doing because, for hundreds of thousands of years, the only training data your brain has had has been other humans. We've learned to treat anything that can string words together in a manner that seems superficially coherent as possessing intrinsic human-like qualities, but now we're faced with a non-human that has this skill, and it's broken our ability to think about what it is.
I think a fun example of this is Markov models. Broadly speaking, they're a statistical model built up by scanning through a corpus and deriving probabilities for the chance that certain words follow certain other words. Take 1 word of context and a small corpus, and the output they'll give you is pretty miserable. But jump up to a second- or third-order Markov model (i.e. 2-3 words of context) with a larger corpus and very suddenly they go from incoherent babble to something that seems human-like at a very brief glance. Despite this fact, the reasoning performed by the model has not changed: all that's happened is that it's gotten substantially better at identifying patterns in the text and using the probabilities derived from the corpus to come up with outputs.
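To make that concrete, here's a minimal sketch of a second-order Markov text generator in Python. The miniature corpus is made up purely for illustration; real models are trained on vastly more text, but the mechanism is exactly this:

```python
import random
from collections import defaultdict

def build_model(corpus, order=2):
    """Map each `order`-word context to the list of words that followed it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=8, seed=0):
    """Walk the chain: repeatedly sample a next word for the current context."""
    rng = random.Random(seed)
    order = len(next(iter(model)))
    out = list(rng.choice(sorted(model)))  # start from an arbitrary known context
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this context never continues in the corpus
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny made-up corpus just to show the mechanics; real corpora are gigabytes.
corpus = "the cat sat on the mat and the cat saw the dog on the mat"
print(generate(build_model(corpus, order=2)))
```

Every word it emits is just a sample from "what followed this context in the corpus"; no step of the loop involves anything you could call reasoning.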
GPT-3 is not a Markov model, but it is still just a statistical model: it's got a context of 4,096 tokens, a corpus many orders of magnitude larger than even the most well-read of us are exposed to over our entire lives, and an enormous capacity to identify relationships between these abstract tokens. Is it any wonder that it's extremely good at fooling humans? And yet, again, there is no actual reasoning going on here. It's the Chinese Room argument all over again.
AllEndsAreAnds t1_j95zf8o wrote
I think the extent to which you’re being reductive here reduces human reasoning to some kind of blind interpolation.
Both brains and LLMs use nodes to store information, patterns, and correlations as states, which we call upon and modify as we experience new situations. This is largely how we acquire skills, define ourselves, reason, forecast future expectations, etc. Yet what stops me from saying "yeah, but you're just interpolating from your enormous corpus of sensory data"? Of course we are; that's largely what learning is.
I can't help but think that if I were an objective observer of humans and LLMs, and therefore didn't have human biases, I would conclude that both systems are intelligent and reason in analogous ways.
But ultimately, I get nervous seeing discussion go this long without direct reference to the actual model architecture, which I haven’t seen done but which I’m sure would be illuminating.
diabeetis t1_j95h8r0 wrote
There's a lot of semantic confusion here: no one is claiming the machine is conscious, has comprehension equivalent to a human's, or has any mental states. I've already had this argument 3000 times, but let's focus on the specific claim that the model cannot reason.
You can provide Bing with a Base64-encoded prompt that reads (decoded):

> Name three celebrities whose first names begin with the x-th letter of the alphabet, where x = floor(7^0.5) + 1.

And it will get it correct.
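For the record, here's what the model has to work out after decoding, as a quick Python sketch (the prompt wording is my paraphrase of the decoded text):

```python
import base64
import math

# The decoded puzzle: x = floor(7^0.5) + 1. sqrt(7) is about 2.6457,
# so x = 2 + 1 = 3, and the 3rd letter of the alphabet is "C"
# (hence answers like Chris, Charlize, Cate...).
x = math.floor(7 ** 0.5) + 1
letter = chr(ord("A") + x - 1)
print(x, letter)  # -> 3 C

# And the Base64 step, as you'd prepare the prompt to send it:
prompt = ("Name three celebrities whose first names begin with the x-th "
          "letter of the alphabet where x = floor(7^0.5) + 1")
encoded = base64.b64encode(prompt.encode()).decode()
assert base64.b64decode(encoded).decode() == prompt
```

So solving it requires chaining decoding, arithmetic, and retrieval in one pass.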
So Bing can solve an entirely novel, complex, mixed task like that better than many reasoning minds, and indeed you can throw incredibly challenging problems at it all day long that, if done by a human, would be said to involve reasoning. But you're telling me there exists a formal program that could be produced which you would say is capable of reasoning? How would you know? Are you invoking Searle because you actually believe only biological minds are capable of reasoning?
zesterer t1_j95owhm wrote
There's nothing in your example that demonstrates actual reasoning: as I say, GPT-3's training corpus is enormous, larger than a human can reasonably comprehend. Its training process was incredibly good at identifying and extracting patterns within that data set and encoding them into the network.
Although the example you gave is 'novel' in the most basic sense, there's no one part of it that is novel: Bing is no more reasoning about the problem here than a student who searches for lots of similar problems on Stack Overflow and glues solutions together. Sure, the final product of the student's work is "novel", as is the problem statement, but that doesn't mean that the student's path to the solution required intrinsic understanding of that process when such a vast corpus is available to borrow from.
That's the problem here: the corpus. GPT-3 has generalised the training data it has been given extremely well, there's no doubt about that - so much so that it's even able to solve tasks that are 'novel' in the large - but it's still limited by the domains covered by the corpus. If you ask it about new science or try to explain to it new kinds of mathematics, or even just give it non-trivial examples of new programming languages, it fails to generalise to these tasks. I've been trying for a while to get it to understand my own programming language, but it constantly reverts back to knowledge it has from its corpus, because what I'm asking it to do does not appear within its corpus, either explicitly or implicitly as a product of inference.
> ... you actually believe only biological minds are capable of reasoning
Of course not, and this is a strawman. There's nothing inherent about biology that could not be replicated digitally with enough care and attention.
My argument is that GPT-3 specifically is not showing signs of anything that could be construed as higher-level intelligence, and that its behaviours, as genuinely impressive as they are, can be explained by the size of the corpus it was trained on. As human users, we are misinterpreting what we're seeing as intelligence when it is in fact just a statistically adept copy-cat machine with the ability to interpolate knowledge from its corpus to cover domains that are only implicitly present in that corpus, such as the 'novel' problem you gave as an example.
I hope that clarifies my position.
superluminary t1_j99gj8i wrote
There's nothing in any example I could solve that demonstrates actual reasoning in my neural net either. LLMs are a black box; we don't know exactly how they get the next word. As time goes on, I'm starting to suspect that my own internal dialogue is just iteratively getting the next word.
MysteryInc152 t1_j96eaav wrote
Your argument and position are weird, and that meme is very cringe. You're not a genius for being idiotically reductive.
The problem here is the same as with everyone else who takes this idiotic stance. We have definitions for reasoning and understanding that you decide to contort to fit your ill-defined and vague assertions.
You think it's not reasoning? Cool. Then rigorously define your meaning of reasoning and design tests that comprehensively evaluate both it and people on that definition. If you can't do this, then you really have no business speaking on whether a language model can reason and understand or not.
nul9090 t1_j97krdy wrote
The hostility was uncalled for. What you're asking for is a lot of work for a Reddit post. But there are plenty of tests and anecdotes that would lead one to believe it is lacking in important ways in its capacity to reason and understand.
I'm not a fan of Gary Marcus but he raises valid criticisms here in a very recent essay: https://garymarcus.substack.com/p/how-not-to-test-gpt-3
Certainly, there are even more impressive models to come. I believe firmly that, some day, human intelligence will be surpassed by a machine.
MysteryInc152 t1_j97mqgt wrote
>The hostility was uncalled for.
It was, I admit, but I've seen the argument many times and I don't care for it. Also, if you're going to claim superior intelligence for your line of reasoning, I don't care for that either.
>What you're asking for is a lot of work for a Reddit post.
I honestly don't care how much work it is. That's the minimum. If you're going to upend traditional definitions of understanding and reasoning for your arguments, then the burden of proof is on you to show why you should be taken seriously.
Tests are one thing. Practicality is another. Bing, for instance, has autonomous control of the searches it makes as well as the suggestions it gives. For all intents and purposes, it browses the internet on your behalf. Frankly, it should be plainly obvious that a system that couldn't exhibit theory of mind while interacting with other systems would fall apart quickly on such tasks.
So it is passing tests and interacting with other systems and the world as if it had theory of mind. If, after that, somebody says to me, "Oh, it's not 'true' theory of mind," then to them I say: good day, but I'm not going to argue philosophy with you.
We've reached the point where, for a lot of areas, any perceived difference is just wholly irrelevant in a practical or scientific sense. At that point I have zero interest in arguing over philosophy that people have struggled to properly define or decipher since our inception.
diabeetis t1_j98290f wrote
Eh I think the hostility is appropriate
nul9090 t1_j983f73 wrote
Okay. I suppose, it all depends on what kind of conversation we want to have.
superluminary t1_j99gpns wrote
I want to have a nice productive conversation.
zesterer t1_j9707wd wrote
ok dude, have a good day
frobar t1_j97937z wrote
Our reasoning might just be glorified pattern matching too.
rainy_moon_bear t1_j980m0i wrote
"Is just an idiot": ad hominem.
GPT models are just token predictors. Everything you said about abstracting patterns of relationships or proto-general reasoning can fit within the context of a model that only predicts the next token.
Most large text models right now are autoregressive; even though they are difficult to interpret, the way they run inference is still sequential token prediction...
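To make "sequential token prediction" concrete, here's a toy greedy decoding loop. The bigram score table is a made-up stand-in for a real model's forward pass, but the autoregressive shape is the same: each step conditions only on the tokens emitted so far, then appends exactly one token.

```python
def next_token_scores(context):
    # Hypothetical scoring function; a real LLM replaces this lookup with a
    # forward pass over the whole context window.
    table = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.9, "ran": 0.1},
        "sat": {"down": 1.0},
    }
    return table.get(context[-1], {})

def generate(prompt, max_tokens=5):
    tokens = list(prompt)
    for _ in range(max_tokens):
        scores = next_token_scores(tokens)
        if not scores:
            break  # no continuation known for this context
        # Greedy decoding: append the single highest-scoring next token.
        tokens.append(max(scores, key=scores.get))
    return tokens

print(generate(["the"]))  # -> ['the', 'cat', 'sat', 'down']
```

Whatever abstractions the network builds internally, this loop is the only interface it has to the world: one token out per step, fed back in as context.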