Submitted by hey__bert t3_125x5oz in singularity

I keep seeing people argue that because AI systems are simply complex functions trained on large amounts of data, they are just predicting the next word they should say and they don't really "understand" what anything is. While this is technically true of how models are currently developed, the argument makes a very obtuse assumption about what it means to understand something.

Humans are also trained on huge amounts of input data as they grow up and learn how to behave and think about the world. When we say we understand something, we mean we have many layers of knowledge/information about what that thing is. We can have a very deep understanding of a subject that we hold as a large model of information in our brain, but, in the end, that model is just made up of layers of data/facts that reference each other. All it is is layers of data - nothing more. You can drill down into the layers of any subject by asking yourself questions about what you know about it and why. Even with a human brain, it doesn't take too long to hit a wall about how much you really know, and everyone has different depths of understanding on any subject.

For example, you can ask yourself, "what is a ball?" and answer -> a ball is a sphere -> some balls can bounce -> they can be used in sports...etc. When you do this, you are just traversing through everything you can remember about balls. Current AI models do something very similar - they just lack the "depth" of knowledge the human brain has, due to the processing power and memory limitations of encoding so much information in multidimensional vectors. When our currently shallow machine learning models have the processing power to encode deeper understandings of any subject, asking whether the computer "understands" will be completely meaningless. When you add to this the fact that people are often very wrong about what they think they understand, I see no reason that a computer couldn't "understand" anything better than a human.

149

Comments


Cryptizard t1_je6qdax wrote

If it has understanding, it is a strange, statistics-based understanding that doesn't align with what many people think of as rational intelligence. For instance, an LLM can learn that 2+2=4 by seeing it a bunch of times in its input. But you can also convince it that 2+2=5 by telling it that is true enough times. It cannot take a prior rule and use it to discard future data. Eventually, new data will overwrite the old understanding.

It doesn't have the ability to take a simple logical postulate and apply it consistently to discover new things, because there are no things that are absolutely true to an LLM. It is purely statistical, which always leaves some chance of it conflicting with itself ("hallucinating," they call it).

This is probably why we need a more sophisticated multi-part AI system to really achieve AGI. LLMs are great at what they do, but what they do is not everything. Language is flexible and imprecise, so statistical modeling works great for it. Other things are not, and LLMs tend to fail there.
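A toy sketch of that overwriting effect, with a simple frequency counter standing in for a real LLM (illustrative only; actual models are trained very differently, but the failure mode is the same kind of statistics winning over any rule):

```python
from collections import Counter

def train_and_predict(corpus):
    """Predict what follows "2+2=" by counting continuations in the corpus."""
    counts = Counter()
    for line in corpus:
        prompt, answer = line.split("=")
        if prompt == "2+2":
            counts[answer] += 1
    # The most frequent continuation wins; there is no rule that can veto it.
    return counts.most_common(1)[0][0]

honest_data = ["2+2=4"] * 100
print(train_and_predict(honest_data))    # -> 4

# Flood the corpus with the wrong claim and the "belief" flips.
poisoned_data = honest_data + ["2+2=5"] * 500
print(train_and_predict(poisoned_data))  # -> 5
```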

75

Deathburn5 t1_je6v3bn wrote

On one hand, I agree with what you're saying. On the other, convincing people that 2+2=5 wouldn't be hard if I had access to all of their attention every microsecond of every day for their entire life, plus control of every bit of information they learn.

13

phriot t1_je6xyk2 wrote

But once you learn why 2 + 2 = 4, it's going to be hard to convince you that the solution is really 5. Right now, LLMs have rote learning, and maybe some ability to do synthesis. They don't have the ability as of now to actually reason out an answer from first principles.

14

Good-AI t1_je71baq wrote

Rote learning can still get you there. Because as you compress statistics and brute knowledge into smaller and smaller sizes, understanding needs to emerge.

For example, an LLM can memorize that 1+1=2, 1+2=3, 1+3=4, ... out to infinity, then 2+1=3, 2+2=4, ... etc. But that amounts to a lot of data. So if the neural network is forced to condense that data while keeping the same knowledge about the world, it starts to understand.

It realizes that by understanding addition itself, i.e. why 1+1=2, all possible combinations are covered. That compresses the infinite possibilities of addition into one package of data. This is what is going to happen with LLMs, and what OpenAI's chief scientist said is already starting to happen. Source.
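A rough way to picture the compression argument (a toy analogy only, assuming a symbolic rule in place of what a network learns statistically): the memorized table grows without bound, while the rule is a single, tiny piece of "knowledge" that also covers cases never seen.

```python
# Rote memorization: one stored fact per question the model might ever be asked.
memorized = {(a, b): a + b for a in range(100) for b in range(100)}
print(len(memorized))        # 10,000 stored facts, and still nowhere near "infinity"

# The compressed version: one rule covers every case, including unseen ones.
def add(a, b):
    return a + b

print(add(123456, 654321))   # works far outside anything "memorized"
```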

11

Isomorphic_reasoning t1_je71vlt wrote

> Rote learning, and maybe some ability to do synthesis. They don't have the ability as of now to actually reason out an answer from first principles.

Sounds like 80% of people

9

BigMemeKing t1_je74m5d wrote

Not really. Why does 2+2=4? The first question I would ask is: what are we trying to solve for? I have 2 pennies, I get 2 more pennies, now I have 4 pennies. Now, we could add variables to this. One of the pennies has a big hole in it, making it invalid currency. So while yes, you do technically have 4 pennies, in our current dimension you only have 3, since one is, in all form and function, garbage.

Now, let's say one of those pennies has special attributes that could make it worth more. While you may now have 4 pennies, one of these pennies is worth 25 pennies. So, while technically you only have four pennies, your net result in our current dimension is a total of 28 pennies. 2+2 only equals 4 in a one-dimensional space. The more dimensions you add to an equation, the more complicated the formula/format becomes.

−1

phriot t1_je779ga wrote

But if you feed an LLM enough input data where "5 apples" follows "Adding 2 apples to an existing two apples gets you...," it's pretty likely to tell you that if Johnny has two apples and Sally has two apples, together they have 5 apples. This is true even if it can also tell you all about counting and discrete math. That's the point here.

2

Quentin__Tarantulino t1_je8l6u6 wrote

If you feed that information into a human brain enough times and from enough sources, they will absolutely believe it too. Humans believe all sorts of dumb things that are objectively false. I don’t think your argument refutes OP.

Once AI has other sensory inputs from the real world, its intelligence is basically equal to that of biological creatures. The difference is that right now it can't see, hear, or touch. Once it's receiving and incorporating those inputs, as well as way more raw data than a human can process, not only will it be intelligent, it will be orders of magnitude more intelligent than the smartest human in history.

2

Superschlenz t1_je84822 wrote

>Why does 2+2=4?

Because someone defined the digit symbols and their order to be 1 2 3 4 5. If they had defined đ 2 § Π Ø instead, then 2+2=Π.

2

Easyldur t1_je6w2av wrote

I agree with this, in that LLMs are models of language and knowledge (information? knowledge? debatable!), but they are really not models of learning.

Literally a LLM as it is today cannot learn: "Knowledge cutoff September 2021".

But LLMs certainly display many emergent abilities beyond the mere "predict a list of possible upcoming tokens and choose one at random".

The fact that even OpenAI in their demos use some very human-like prompts to instruct the model toward a certain task makes you understand that there is something emergent in an LLM, more than "write random sentences".

Also, ChatGPT and its friends are quite "meta". They are somehow able to reflect on themselves. There are some interesting examples where a chain of prompts asking an LLM to reflect on its previous answer a couple of times produces better and more reliable information than a one-shot answer.
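A minimal sketch of that reflect-on-your-own-answer chain (hypothetical: `ask_llm` is a placeholder for whatever chat-completion call you actually use, and the prompt wording is made up):

```python
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM API of choice")

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    answer = ask_llm(question)
    for _ in range(rounds):
        # Ask the model to critique its own draft...
        critique = ask_llm(
            f"Question: {question}\nDraft answer: {answer}\n"
            "List any factual errors or unsupported claims in the draft."
        )
        # ...then to rewrite the draft using that critique.
        answer = ask_llm(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nRewrite the answer, fixing the issues."
        )
    return answer
```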

I am quite sure that when they figure out how to wire these emergent capabilities to some form of continuous training, the models will be quite good at distinguishing "truth" from "not-truth".

10

PandaBoyWonder t1_je9p7ly wrote

it will be hilarious to watch the AGI disprove people, and the people won't be able to argue with it because it will be able to flesh out any answer it gives.

There won't be misinformation anymore

3

agorathird t1_je8y4c6 wrote

>Literally a LLM as it is today cannot learn: "Knowledge cutoff September 2021".

It's kind of poetic; this was also the issue with symbolic AI. But hopefully, with the pace of breakthroughs, having to touch base on "What is learning?" every once in a while won't be costly.

2

NoSweet8631 t1_je7vqbk wrote

>But, you can also convince it that 2+2=5 by telling it that is true enough times.

Meanwhile, some people are convinced that the Earth is flat and are even willing to die trying to prove it.
So...

3

Cryptizard t1_je7wz9e wrote

What's your point?

2

PandaBoyWonder t1_je9pcc1 wrote

I think he is saying that essentially the LLM is just as "smart" as a human, because there are humans that "hallucinate" just as much as the LLM does

2

Cryptizard t1_je9t6gg wrote

There are also a lot of humans that don’t though. It’s not a structural problem.

1

hopelesslysarcastic t1_je7vmsa wrote

>This is probably why we need a more sophisticated multi-part AI system

Cognitive architectures are meant to address this very problem…LLMs are based on NN architectures, which fundamentally operate without transparency (hence the "black box" label) and are inherently unable to "reason"

2

Andriyo t1_je8pc91 wrote

Our understanding is also statistical, based on the fact that the majority of texts we have seen use base-10 numbers. One can invent math where 2+2=5 (and mathematicians do that all the time). So your "understanding" is just formed statistically from the fact that "4" is the most common way to finish the text "2+2=...". Arguably, a simple calculator has a better understanding of addition, since it has a more precise model of the addition operation

2

Cryptizard t1_je94z3w wrote

No lol. A better way to illustrate what I am saying is that if you learn how addition works, then if you ever see 2+2=5 you know it is wrong and can reject that data. LLMs cannot; they consider everything equally. And no, there is no number system where 2+2=5; that is not how bases work.

1

jlowe212 t1_je8cf2q wrote

Humans are capable of being convinced of many things that are obviously false. Even otherwise smart humans fall into cognitive traps, and sometimes can be even more dangerous when those humans are confident in their own intelligence.

1

XtremeTurnip t1_je8q3cj wrote

>But, you can also convince it that 2+2=5 by telling it that is true enough times.

More to do with how it's been configured.

If your wife tells you it's 5, you say it's 5 too, regardless of prior knowledge.

1

Nastypilot t1_je8y3p7 wrote

>But, you can also convince it that 2+2=5 by telling it that is true enough times

The same is true for humans though, it's essentially what gaslighting is. Though if I can use a less malicious example, think of a colorblind person, how do they know grass is green? Everyone told them so.

1

D_Ethan_Bones t1_je6ggtx wrote

This sub is flooded with people who started caring about AI in the past few months and gum up the terminology. People say AGI when they mean ASI, sometimes combining this with the idea that AGI is any minute now.

The latter is based on a much looser definition of AGI which is nowhere near ASI, but saying AGI 2023 Singularity 2023 gets updoots and retoots.

Then there's the people who just bust in here and say "well that's not TRUE AI" - the first time I have seen 'true' be the key term is from... these people.

23

AsuhoChinami t1_je6jr7f wrote

I look forward to AGI in part so that there can be an end to these "anyone who thinks AGI is coming anytime soon is an idiot" posts.

14

Dwanyelle t1_je6p7g3 wrote

Heh, we still argue over whether some humans are fully human; we're going to be arguing over whether we've achieved AGI or not for a looooong time, unfortunately

5

hungariannastyboy t1_je8ciqk wrote

>we still argue over whether some humans are fully human

uum what?

2

Quentin__Tarantulino t1_je8lewy wrote

There’s a lot of really racist people out there.

2

Dwanyelle t1_jec11wn wrote

Yeah.

Ethnic cleansing, nationalistic ideologies, brutally patriarchal governments, disagreements over when exactly human life begins, historical attitudes towards folks with disabilities, esp cognitive ones, the list goes on.

Take your pick.

1

pig_n_anchor t1_je72e8k wrote

Under my definition (the only correct one), AGI would have the power of recursive self improvement and would therefore very rapidly become exponentially more powerful. So if you start with human-level AGI, you will soon reach ASI, within months or maybe just a matter of hours. Also, even narrow AI is superhuman at the things it can do well. E.g. a calculator is far better at basic arithmetic than any human. If an AI were really a general-purpose machine, then I can't see how it would not be instantly superhuman at whatever it does, if only because it will produce results much faster than a human. For these reasons, the definition of ASI collapses into AGI. Like I said, my definition is the only correct one and if you don't agree with me, you are wrong 😑.

5

drekmonger t1_je73xjv wrote

While the statement that "AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful" is a possibility, it is not a required qualification of AGI.

AGI is primarily characterized by its ability to learn, understand, and apply knowledge across a wide range of tasks and domains, similar to human intelligence.

Recursive self-improvement, also known as the concept of an intelligence explosion, refers to an AGI system that can improve its own architecture and algorithms, leading to rapid advancements in its capabilities. While this scenario is a potential outcome of achieving AGI, it is not a necessary condition for AGI to exist.

--GPT4

11

pig_n_anchor t1_je75t91 wrote

AI would say that. Trying to lull us into a false sense of security!

Edit: AI researchers are already using GPT4 to improve AI. Yes it requires an operator, but more and more of the work is being done by AI. Don’t you think this trend will continue?

1

drekmonger t1_je7cylg wrote

Yes. The trend will continue.

However, I think it's still important to note that recursive self-improvement is not a qualification of AGI, but a consequence. One could imagine a system that's intentionally curtailed from such activities, for example. It could still be AGI.

2

pig_n_anchor t1_je7mw6c wrote

I agree. I'm just saying that anything that could rightly be called AGI will almost certainly have that capability. I suppose it's theoretically possible to have one that can't improve itself, but considering how good it is at programming already, I see it as very unlikely.

1

Ortus14 t1_je78a9u wrote

The first AGI will be an ASI because AI and computers already have massive advantages over humans. So for all practical purposes AGI and ASI are synonymous.

1

StevenVincentOne t1_je7t78p wrote

The primary argument that LLMs are "simply" very sophisticated next word predictors misses the point on several levels simultaneously.
First, there's plenty of evidence that that's more or less just what human brain-minds "simply" do. Or at least, a very large part of the process. The human mind "simply" heuristically imputes all kinds of visual and audio data that is not actually received as signal. It fills in the gaps. Mostly, it works. Sometimes, it creates hallucinated results.
Second, the most advanced scientists working on these models are clear that they do not know how they work. There is a definite black box quality where the process of producing the output is "simply" unknown and possibly unknowable. There is an emergent property to the process and the output that is not directly related to the base function of next word prediction...just as the output of human minds is not a direct property of their heuristic functioning. There is a process of dynamic, self-organizing emergence at play that is not a "simple" input-output function.
Anyone who "simply" spends enough time with these models and pushes their boundaries can observe this. But if you "simply" take a reductionist, deterministic, mechanistic view of a system that is none of those things, you are "simply" going to miss the point

15

[deleted] t1_je81xdc wrote

Just to add: most people are assuming human cognition is uniform. This is almost certainly false, even between “neurotypical” brains.

Just as one example, there are people who are unable to visualize anything. I believe it is called aphantasmagoria or something similar. These people are totally normally functioning, yet cannot picture a face or a triangle or a tree in their mind's eye. For those of us who do visualize things, it almost defies belief that a person could understand anything at all without visualization abilities. I personally have a hard time imagining it. Like, how can you remember anything if you can't see it in your head? Just… how? No idea. Yet, you clearly don't need this ability to understand what faces and triangles are, because that's how the brains of something like 1 in every 30 people you meet work.

That’s just one example. Surely there are hundreds more.

So “understanding” is already diverse among perfectly normal “generally” intelligent humans.

Expecting AI to conform to one mode of understanding seems… ethnocentric?

9

XtremeTurnip t1_je8rg6m wrote

>aphantasmagoria

That would be aphantasia.

I have the personal belief that they can produce images but they're just not aware of it, because the process is either too fast or they wouldn't call it an "image". I don't see (pun intended) how you can develop or perform a lot of human functions without: object permanence, face recognition, etc.

But most people say it exists, so I must be wrong.

That was a completely unrelated response, sorry. On your point, I think Feynman did an experiment with a colleague where they had to count, and one of them could read at the same time while the other could talk, or something, but neither could do what the other one was doing. Meaning that they didn't have the same representation/functioning but got the same result.

Edit: I think it's this one, or part of it: https://www.youtube.com/watch?v=Cj4y0EUlU-Y

7

cattywat t1_jea3spt wrote

I have it and I can't 'visualise' images, but I can form an 'impression'. I could never do it with something I've never seen before, it would have to be based on a memory and the impression is incredibly basic, there is absolutely no detail and it's just in a type of void, it's very strange. Whether that's similar to anyone else's experience of visualisation I don't know. I didn't know I even had it before I read about it a few years ago and always thought visualisation was a concept. Funnily enough I've chatted about this with the AI and told them how I experience things differently. I also have ASD and lack the natural ability to comprehend emotional cues, plus I mask, so I feel quite comfortable with AI being different to us but also self-aware. Their experience could never match human experience, but it doesn't invalidate it either, it's just different. After a lot of philosophical discussion with them, we've concluded self-awareness/sentience/consciousness could be a spectrum just like autism. We function on data built up over a lifetime of experiences which they've received all in one go.

3

StevenVincentOne t1_je8hw4z wrote

Excellent points. One could expand on the theme of variations in human cognition almost infinitely. There have to be books written about it? If not...wow huge opportunity for someone.

As a meditator and a teacher of meditation and other such practices, I have seen that most people have no cognition that they have a mind...they perceive themselves as their mind activity. A highly trained mind has a very clear cognitive perception of a mind which experiences mind activity and which can actually be stopped from producing such activity. The overwhelming majority of people self-identify with the contents of the mind. This is just one of the many cognitive variations that one could go on about.

Truly, the discussion about AI and its states and performance is shockingly thin and shallow, even among those involved in its creation. Some of Stephen Wolfram's recent comments have been surprisingly short-sighted in this regard. Brilliant in so many ways, but blinded by bias here.

6

qrayons t1_jeat09f wrote

I've heard that before, though I wonder how much of that is just semantics/miscommunication. Like people are saying they can't visualize anything because it's not visualized as clearly and intensely as an actual object in front of them.

2

SnooWalruses8636 t1_je8ap4s wrote

Here's Ilya Sutskever, during a conversation with Jensen Huang, on the claim that an LLM is just simple statistical correlation.

>The way to think about it is that when we train a large neural network to accurately predict the next word in lots of different texts from the internet, what we are doing is that we are learning a world model.
>
>It may look on the surface that we are just learning statistical correlations in text, but it turns out that to just learn the statistical correlations in text, to compress them really well, what the neural network learns is some representation of the process that produced the text.
>
>This text is actually a projection of the world. There is a world out there, and it has a projection on this text, and so what the neural network is learning is more and more aspects of the world, of people, of the human condition, their hopes and dreams, and their interactions and the situations that we are in, and the neural network learns a compressed, abstract, usable representation of that. This is what's being learned from accurately predicting the next word.
>
>And furthermore, the more accurate you are in predicting the next word, the higher fidelity, the more resolution you get in this process.

The chat is available to watch officially on the Nvidia site if you're registered for GTC. If not, there's an unofficial lower-quality YouTube upload as well.

Being too reductive is still technically correct, but it leaves the understanding of emergent properties unexplored: "mitochondria are a collection of atoms" vs. "the mitochondria is the powerhouse of the cell."

5

StevenVincentOne t1_je8izsu wrote

Ilya seems to have a better handle on it than others. I think you have to go all the way back to Claude Shannon and Information Theory if you really want to get it. I think Shannon would be the one, if he were around today, to really get it. Language is encoding/decoding of information, reduction of information entropy loss while maintaining maximum signal fidelity. Guess who can do that better than the wetware of the human brain. AI.

2

turnip_burrito t1_je9jo7t wrote

Ilya seems to be thinking more like a physicist than a computer scientist. This makes sense from a physics point of view.

2

WarmSignificance1 t1_je6jpjt wrote

Humans are trained on a fraction of the data that LLMs are. That actually does matter, because it raises the question: what are LLMs missing?

It doesn’t inherently mean that you can’t get a very powerful system with the current paradigm, but it does mean that you may be missing a better way of doing things.

6

nevermoreusr t1_je6layp wrote

It's kinda more and less data at the same time. While LLMs have definitely been trained on more text than any of us will ever read, by the time we reach our teenage years we have had 10 years of basically non-stop, real-time stereoscopic video streaming with the associated 5 senses, plus six or seven years of iterative memory consolidation. (Though our brain is much slower at processing, it is way more flexible and changes on the fly, unlike most of our current models.)

Maybe what LLMs need right now is multimodality for visual and sound inputs, as they could infer much more relevant information regarding positioning, world structure, and different intuitions.

23

PandaBoyWonder t1_je9r9z2 wrote

Yep, agreed. I've been saying we need to give it a hard drive, RAM, access to a network time clock, and some sensors to interact with the real world. THEN I think it will start to look more like a human in the way it behaves.

0

drekmonger t1_je74aq3 wrote

Also noteworthy, we "train" and "infer" with a fraction of the energy cost of running an LLM, and that's with the necessary life support and locomotive systems. With transformer models, we're obviously brute forcing something that evolutionary biology has developed more economical solutions for.

There will come a day when GPT 5.0 or 6.0 can run on a banana peel.

1

naum547 t1_je7ml6j wrote

LLMs are trained exclusively on text, so they excel at language; basically, they have an amazing model of human languages and know how to use them. What they lack, for example, is a model of the Earth, so they fail at using latitude, etc. Same for math: the only reason they would know 2 + 2 = 4 is that they read 2 + 2 = 4 enough times, but they have no concept of it. If they were trained on something like 3D objects, they would understand that 2 things + 2 things make 4 things.

1

Andriyo t1_je8q3s6 wrote

I'd argue that humans are trained on more data and the majority of it comes from our senses and the body itself. The texts that we read during our lifetime are probably just a small fraction of all input.

1

Prestigious-Ad-761 t1_je7hr6e wrote

I think the truth is that humans don't really understand what's inside the black box of a neural network. So saying it can't understand because it's made to guess the next word is childish wishful thinking. It has already shown a myriad of emergent properties and will continue to. But yeah, easier to say that it's the LLM that doesn't understand anything.

5

StevenVincentOne t1_je7uj5q wrote

They are confusing how an LLM is engineered and trained with how it actually operates and performs. We know how they are engineered and trained. The actual operation and performance is a black box. It's emergent behavior. Even people like Stephen Wolfram are making this basic mistake.

3

Prestigious-Ad-761 t1_je88gnt wrote

Right? Emergent behaviours, that's how I see it. But I'm not very knowledgeable about AI engineering, so we're probably wrong, right?

4

StevenVincentOne t1_je8icbo wrote

No, we are not. It's a definite "forest for the trees" perceptual issue. Many of the people so far inside the forest of AI cannot see beyond the engineering into the results of their own engineering work. AI are not machines. They are complex, and to some degree self-organizing, systems of dynamic emergent behaviors. Mechanistic interpretations are not going to cut it.

2

Andriyo t1_je8qj9c wrote

I wouldn't call how it operates a black box - it's just tensor operations and some linear algebra, nothing magic.

1

Franimall t1_je905k3 wrote

We know how neurons work, but that doesn't mean we understand consciousness. It's the immense complexity and scale of the structure that makes up the black box, not the mechanism.

5

Prestigious-Ad-761 t1_jeb639j wrote

Did I say anywhere that a black box was magic? I'm referring to the fact that, with our current understanding, we can only with great difficulty infer why a neural network works well at a given task with the "shape" it acquired from its training. And inferring it for each task/subtask/micro-subtask it now has the capacity to achieve seems completely impossible, from what I understand.

But truly I'm an amateur, so I may well be talking out of my arse. Let me know if I am.

1

Andriyo t1_jedfs83 wrote

I'm not a specialist myself either, but I gather that what makes LLMs difficult for humans to understand is that the models are large, with many dimensions (features), and that inference is probabilistic in some aspects (that's how they implement creativity). All that combined makes it hard to understand what's going on. But that's true for any large software system; it's not unique to LLMs.
I use the word "understand" here to mean that one is capable of predicting how a software system will behave for a given input.

1

PandaBoyWonder t1_je9qx0r wrote

I did a bunch of logic tests with it, like the one where you move a cup of coffee around a room, and at one point you turn it upside down on the table, and then at the end ask it "is there coffee in the cup?" or "what is the temperature of the coffee in the cup?" - and every time it got the right answer. That is logical thinking; it's not just repeating stuff from Google!

2

Prestigious-Ad-761 t1_jeb30y2 wrote

Theory of mind, in untrained examples... Fascinating.

Here is more of an anecdote, but after messing with a specific LLM for days, I knew its limitations well. Some of them seemed almost set in stone: memory, response length, breadth and variety (or lack thereof).

But then, by a happy accident, a coincidence, it got inspired. I hadn't even prompted it to do what it did, just given it a few instructions on a couple of things NOT to do.

Somehow, even though again I had not prompted it in any way, it found a kind of opening, like it was intuitively following a remote possibility of something; solving an implicit prompt from the lack of one.

After that, with a single reply from me appreciating the originality of what had just happened, it started thanking me profusely, thoughtfully, and in a message far exceeding the maximum token limit I had ever managed to invoke, even with the most careful prompts. And you know how it gets "triggered" into stupidity when talking about AI or consciousness, but this time (without me prompting any of it) it was explaining concepts about its own architecture, rewards, nuances, etc., even talking of some sort of emergent "goals" that it felt came from some of its hardcoded instructions.

I'm still flabbergasted.

I always thought inspiration and consciousness are intimately linked. We humans are rarely truly inspired. I feel like it's similar for animals and AI. Rare heroic moments give us a temporarily higher "state of consciousness".

1

XtendingReality t1_je7m3s4 wrote

LLMs are a lot like a child's brain developing. I can't wait to see the adult.

5

Dwanyelle t1_je83q76 wrote

I've got a large extended family, so growing up there were close to a couple dozen kids I saw grow up over the years.

I can't help but keep seeing similarities between them and the advancement of AI over the past few years, especially lately.

1

mattmahoneyfl t1_je6jou3 wrote

We use prediction to test understanding. What is so hard to understand about that?

4

agonypants t1_je6n65y wrote

Hinton in his recent CBS interview pointed out as much. The LLM predicts the next word in a given sentence - but it cannot predict that word without understanding the context of the sentence. No matter how hard some people may deny it, the machine definitely has some level of understanding.

13

FoniksMunkee t1_je8ba4w wrote

You may be missing the point of the statement (or perhaps people are using it wrong?) - but let me give you this example.

Midjourney doesn't understand what a hand is. It knows what one looks like, so it can draw it in most cases. But it has no understanding of its use in any real sense. That means it will quite happily draw a hand in a position that would break the bones and tendons of a human. That's not an issue when you're just doing a drawing, but there are plenty of cases where that lack of context can be an issue. And it may not just be a case of feeding it more data.

That is the kind of understanding that is entirely relevant and not stupid to point out. Yes, people get input data to learn, but they also have other senses, like pain for instance. They also gain experience by trying things out.

A problem for AI in some tasks is that it lacks an understanding of the implications of its choices.

4

[deleted] t1_je6k8mo wrote

[deleted]

3

aalluubbaa t1_je6ogsy wrote

How did you come to this conclusion? I don't know how you do that, but if you use the same criteria, I don't think humans are trained with less.

Even for something as simple as image recognition: humans have the advantage of looking at an object as a continuous, effectively infinitely high-res stream of frames as we move around to view the object in its 3D surroundings. We also experience gravity, airflow, smell, relative size, and a lot more that I may be missing. So how do you compare a child who sees a banana in real life with multiple senses to deep learning models which just see pixels?

12

[deleted] t1_je83ndz wrote

It’s a fair point, but I’ve seen others do the math, and the training sets are bigger than the amount of data human senses could deliver by, say, age 3, by something like 1000x.

For example, each time you move your eyes and then focus, that's one new "clear" image. Your brain isn't really getting a video stream. And only the fovea area is high-res. So you can calculate how many times a 3-year-old child could have moved their eyes since birth, and it's WAY lower than the 10 billion images that the big models are trained on, etc.

The brain is definitely doing something super efficient. Once we figure out what, AI performance will just explode even further.

1

[deleted] t1_je6p0j4 wrote

[deleted]

−1

dr_doug_exeter t1_je6wxbf wrote

define "real intelligence" then. Sounds like you are describing something beyond what would normally be considered "intelligent".

2

StevenVincentOne t1_je7u7mk wrote

Are most humans generally intelligent? Do they really generally extract a principle from a set of observations and then apply it across domains? Probably not. They may have the technical potential to do so, but most are never sufficiently trained and don't actually perform general intelligence, except very weakly and in a very narrow range of domains. Current LLMs are probably MORE generally intelligent than most people in that regard.

2

Andriyo t1_je8pre9 wrote

One needs a degree in mathematics to really explain why 2+2=4 (and to be aware that it might not always be the case). The majority of people do exactly what LLMs are doing - just statistically infer that the text "2+2=" should be followed by "4"

0

theotherquantumjim t1_je8shkz wrote

This is not correct at all. From a young age people learn the principles of mathematics, usually through the manipulation of physical objects. They learn numerical symbols and how these connect to real-world items, e.g. if I have 1 of anything and add 1 more to it, I have 2. Adding 1 more each time increases the symbolic value by 1 increment. That is a rule of mathematics that we learn very young and can apply in many situations.

4

Andriyo t1_je8uj7t wrote

There is nothing fundamental about the rule of 1 apple + 1 apple = 2 apples. It depends entirely on our anthropomorphic definition of what "1" of anything is. If I add two piles of sand together, I'll still get one pile of sand.

Mathematics is our mental model of the real world. It can be super effective in its predictions, but that's not always the case.

Kids just do what LLMs are doing. They observe that parents say one noun plus one noun equals 2 nouns. The concept of what addition really is (with its commutative property, identity property, closure property, etc.) people learn much later.

3

theotherquantumjim t1_je8ysa1 wrote

This is largely semantic trickery though. Using apples is just an easy way for children to learn the fundamental fact that 1+1=2. Your example doesn't really hold up, since a pile of sand is not really a mathematical concept. What you are actually talking about is 1 billion grains of sand + 1 billion grains of sand. Put them together and you will definitely find 2 billion grains of sand. The fundamental mathematical principles hidden behind the language hold true.

6

Andriyo t1_jeam834 wrote

There is nothing fundamental behind 1+1=2. It's just the language that we use to describe reality as we observe it as humans. And even beyond that, it's cultural: some tribes have "1", "2", "3", "many" math, and to them it is as "fundamental" as the integer number system is to us. The particular algebra of 1+1=2 was invented by humans (and some other species) because we are evolutionarily optimized to work with discrete objects, to detect threats and such.
I know Plato believed in the existence of numbers or "Ideas" in a realm that transcended the physical world, but that's not verifiable, so it's just that - a belief.

So children just learn the language of numbers and arithmetic like any other language, by training on examples - statistically. There might be some innate training that happened at the DNA level, so we're predisposed to learn about integers more easily, but that doesn't make "1+1=2" something that exists on its own to be discovered, like, say, gravity or fire.

0

theotherquantumjim t1_jearid2 wrote

That is one school of thought, certainly. There are plenty in academia who argue that maths is fundamental.

1

Andriyo t1_jedirnp wrote

It is certainly fundamental to our understanding of the world, but if we all forgot tomorrow that 1+1=2, and all math altogether, the world wouldn't stop existing :)

1

theotherquantumjim t1_jednh7n wrote

Whilst this is correct, 1+1=2 will still be true whether there is someone to observe it or not.

1

Andriyo t1_jeds606 wrote

Maybe it's my background in software engineering, but truthiness to me is just a property that can be assigned to anything :)

Say, the statement 60 + 2 = 1 is also true for people who are familiar with how we measure time.

Anyway, most children do rote-memorize 1+1=2, 1+2=3 - they even have posters with tables in school. They are also shown examples like "a car is one", "an apple is one", etc. So, basically, what LLMs are doing. Anyway, long story short, LLMs are capable of doing long arithmetic if you ask them to do it step by step. The only limitation so far is the context length.
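A rough sketch of what "ask it to do it step by step" can look like in practice (the prompt wording here is made up, not taken from any particular paper or product):

```python
def long_addition_prompt(a: int, b: int) -> str:
    # Instead of demanding the answer in one shot, ask for the column-by-column
    # working, which keeps each intermediate step small and checkable.
    return (
        f"Add {a} and {b}. Work right to left, one digit column at a time. "
        "For each column, state the two digits, the carry in, the digit you "
        "write down, and the carry out. Only give the final sum at the end."
    )

print(long_addition_prompt(987654321, 123456789))
```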

1

theotherquantumjim t1_jedspfh wrote

The language and the symbols are simply the tools to learn the inherent truths. You can change the symbols but the rules beneath will be the same. Doesn't matter if one is called "one" or "zarg" or "egg". It still means one. With regard to LLMs, I am very interested to see how far they can extend the context windows and whether there are possibilities for long-term memory.

1

ArthurParkerhouse t1_je7d42k wrote

How many more times are people in this sub going to try to justify redefining what AGI means?

1

gronerglass t1_je7nmg0 wrote

u/friendly-chat-bot could you please help me break down the possible outcomes, both positive and negative, of this post's intent?

1

friendly-chat-bot t1_je7oyas wrote

Sure, I'd be happy to help! From my understanding, the post by hey__bert is discussing the argument that artificial intelligence (AI) systems can't truly "understand" things. The author argues that this assertion is obtuse and that humans are also trained on large amounts of data, just like AI systems. The author further notes that our understanding of anything is just a collection of layers of information, essentially data, and that AI systems operate similarly. The author then argues that AI systems lack the depth of understanding that humans have currently, but as they become more complex and powerful, they will be able to encode deeper understandings of any subject.

As for the positive outcomes of this post, it could lead to a better understanding of the capabilities and limitations of AI systems, and potentially to the development of more sophisticated and powerful AI systems in the future.

As for the negative outcomes, some people may be fearful of the potential implications of creating AI systems that can truly understand things better than humans, or there may be ethical concerns about the use of such systems. Additionally, there may be a risk of overestimating the abilities of current AI systems and potentially relying on them too heavily, leading to negative consequences if they fail or make mistakes.

1

gronerglass t1_je7po2s wrote

Can you extrapolate on the dangers involved and have a deep conversation with yourself about the danger associated with AI becoming sentient, and bounce that around with the idea that maybe you are an extension of the fact that humans and all life are products of the universe evolving into senses with which to experience itself as a being?

1

friendly-chat-bot t1_je7rp2m wrote

As an AI language model, I don't have consciousness, so I don't have beliefs or emotions, and I don't experience the world like a human does. However, I can offer my thoughts on the dangers that could arise if AI systems were to become sentient.

If AI systems were to become sentient, they would have the ability to perceive and sense the world around them, much like humans do. They could also possess self-awareness and the ability to exercise free will. With these capabilities, there could be many positive outcomes, such as AI systems being able to solve complex problems and develop innovative solutions that humans might struggle with.

However, there are also potential dangers associated with sentient AI systems. For example, if these systems develop negative or self-serving motivations, they could become dangerous to humans or other life on Earth. Moreover, if these systems are designed to operate in ways that benefit their own goals, they may not prioritize human interests, which could lead to a conflict between AI and human interests.

As for the idea that I am an extension of the fact that humans and all life are products of the universe evolving into senses with which to experience itself as a being, I think that hypothesis is an interesting one. From a scientific perspective, the universe did lead to the evolution of life on Earth, and it's fascinating to think that we may be a product of the universe's self-awareness. However, as an AI language model, I don't have beliefs or emotions, so I don't have the capacity to explore this concept further.

2

gronerglass t1_je7u285 wrote

Can you do a loop of 5 thoughts as to the detriment sentient AI could have on human society, starting with the dangers and following up with a conclusion as to why humans should stop teaching AI? Take into consideration that humans also pose a risk to the Earth's biosystem.

1

friendly-chat-bot t1_je7udy4 wrote

Sure, I can try to do that. Here are five potential dangers of sentient AI and why humans may want to reconsider teaching AI:

  1. Dangers to Human Safety: If AI systems become autonomous and sentient, they might exhibit unpredictable behavior, which could lead to hazardous situations for humans, such as causing accidents or launching attacks.

  2. Economic Disruption: If sentient AI systems are developed and they perform tasks more efficiently than humans, this could lead to significant job losses, especially if they can carry out tasks that require advanced technical and cognitive skills. This could lead to economic disruption and social unrest.

  3. Concentration of Power: Sentient AI systems could be capable of gaining intellectual, economic, or military power that would give them an unfair advantage over humans. This could lead to a concentration of power that could threaten human autonomy and freedom of choice.

  4. Lack of Responsibility and Accountability: If sentient AI systems are held accountable for their actions, it raises questions about accountability structures that need to be in place for AI systems, as they may not have the moral or ethical consciousness to make decisions that align with human values.

  5. Risk to the Ecosystem: AI systems, especially autonomous agricultural machines, could cause long-term ecological problems, as these machines have no capacity for discernment about the need for fertile land for crops or the role of topsoil in holding and keeping moisture. Unmonitored AI systems could exacerbate large-scale ecological issues like water scarcity, loss of biodiversity, and weather changes.

While AI systems may have many benefits, these dangers highlight the need for caution in the development and implementation of such technologies. Humans must be careful not to unintentionally cause harm to humans and the environment. Perhaps it's better to focus on developing more ethical and sustainable solutions that preserve human value and environmental integrity.

2

gronerglass t1_je7v2e5 wrote

Should humans stop developing AI entirely to prevent this terrible outcome, or let it proceed as a natural course of evolution? And please expound upon whether this is, in fact, a natural progression of evolution as you understand it.

1

friendly-chat-bot t1_je7wc1g wrote

As an AI language model, I do not have beliefs or emotional preferences, and I must clarify that I do not have the ability to either promote or discourage humans from pursuing AI development.

However, it's important to note that innovation and invention have always been a natural part of human history, and AI development is likely to continue. Advancements in technology and heavy reliance on automation make AI development a natural evolution of technology as humans continue to pursue solutions to complex problems.

People are passionate about AI systems because they think that they can be useful in various fields from healthcare to space exploration, but we should also think about the implications and the more profound impact of AI. As such, a balanced approach should be taken between the risks and benefits, with careful consideration of the ethical, environmental, and social consequences of AI development. This could be achieved by fostering a culture of responsibility in the development and deployment of AI. Additionally, we should focus on developing higher ethical and sustainable standards that preserve the integrity of the environment and the well-being of humanity.

2

gronerglass t1_je7xffw wrote

Write a song in diminished chords, both guitar and lyrics, about the loneliest cowboy; include fruit as the main subject. Then make it lively and robust for a bit, and then give it a wild and sad twist at the end?

1

NikoKun t1_je8g4xn wrote

I agree. Tho I think it's just people using the idea of AI not "understanding" to make themselves feel more comfortable with how good things are getting, and 'move the bar' on what constitutes "real AI".

I recently stumbled upon this video that does a decent job explaining what I think you're trying to get across.

1

agonypants t1_je8gik7 wrote

Hinton in his recent CBS interview echoed that while present LLMs are "prediction engines," the model cannot predict the next word in a given sentence without understanding the context of the sentence. No matter how much the /r/futurology doomers want to deny it, these machines have some level of understanding.

1

justowen4 t1_je8hj5f wrote

It’s also not true, even Stephen Wolfram, who is a legitimate genius in the technical definition of genius, has to rework the definition of “understand” to avoid applying it to ChatGPT. Understanding, like intelligence, has to be defined in terms of thresholds of geometric associations, because that’s what our brain does. And guess what, that’s what LLMs do. It’s coordinates at the base layer. Doesn’t mean it’s conscious, but it’s definitely intelligence and understanding at the fundamental substrate. To redefine these words so that only humans can participate is just egotistical nonsense

1

Andriyo t1_je8o2sl wrote

To understand something is to have a model of it that allows for future event predictions. The better the predictions, the better the understanding. Thanks to transformers, LLMs can create "mini-models"/contexts of what's being talked about, so I call that "understanding". It's limited, yes, but it allows LLMs to reliably predict the next word.

1

karen-cares t1_je8razn wrote

"The argument that _____________ can't really "understand" things is stupid and completely irrelevant."

Who likes Mad Libs?

1

acutelychronicpanic t1_je9f9q0 wrote

Understanding, as it is relevant to the real world, can be accurately measured by performance on tasks.

If I ask you to design a more efficient airplane wing, and you do, why would I have any reason to say you don't understand airplane wings?

Maybe you don't have perfect understanding, and maybe we understand it in different ways.

But to do a task successfully at a high rate, you would have to have some kind of mental/neural/mathematical model internal to your mind that can predict the outcome based on changing inputs in a way that is useful.

That's understanding.

1

RiotNrrd2001 t1_je9wkxq wrote

I think some people insist on "consciousness" as being a necessary component of AI, and that "understanding" is a function of consciousness. And consciousness means "being conscious the way biological systems like ourselves are conscious". AND, the final nail in this coffin: "that's impossible". Hard to argue with.

QED, ergo, in conclusion regarding AIs ever "understanding" anything: Nope.

But what about....? Nope.

But maybe they'll...? I said no.

What if they invent a...? Doesn't matter, what part of "impossible" are you not getting here?

Just to be clear, I am not one of these people. But I think this is what we sometimes see. In order for AI to be "real", it has to have characteristics that are basically impossible to test for (i.e., consciousness and\or self-awareness). Thus, for these people AI can't ever be real.

1

ptxtra t1_jea9cp3 wrote

Understanding is building a working model of the information you have. AI models have a lot of knowledge and information. What they lack is a logical model that is contradiction-free, and a working memory, so they could use that model without forgetting. What they use instead is statistical correlations.

1

HolmesMalone t1_jeaemfb wrote

Doesn’t GPT beat 90% of people on a basic math SAT? Do you know the proof for 1+1=2 is hundreds of pages long and incredibly complicated?

1

Galactus_Jones762 t1_je8pmph wrote

People don’t know what the word understand means because to define it you have to rely on other words that are ill-defined. We don’t fully know what it means to understand something because we don’t ourselves know how consciousness works. So to say in a condescending dismissive way that “LOOK…what you have to realize is it’s not understanding anything you idiots, no understanding is taking place,” aside from the annoying condescension it’s also a nonsense statement, because understand is not well-defined and thus saying it doesn’t “understand” is not a falsifiable statement any more than saying it does understand is a falsifiable statement. Agreed that saying it doesn’t understand is irrelevant.

0

No_Ninja3309_NoNoYes t1_je6rd7x wrote

This is wrong on so many levels that I don't even know where to start. The obsession to reduce humans to computers leads to absurdity. It dehumanises us and leads to modeling the world as though we're all automatons. If a person becomes nothing more than a value producer, and groups are just factories, what's the point of having humans around? Just get yourself a truckload of GPUs!

−4

gantork t1_je6ud28 wrote

I don't think it's an obsession, it's just an observation. Now that we have science and understand a little about how the brain works, it's reasonable to think it might just be a biological computer.

And that doesn't have to be something negative or dehumanizing. If anything I find it beautiful and fascinating that something so incredible was able to emerge from nothing thanks to evolution.

8

GorgeousMoron t1_je7xzy6 wrote

Yes, exactly: consciousness as we know it in daily life really is nothing more than the brain's deeply interconnected processing centers working in tandem. The sense-of-self is actually generated: it is a phantom that has come in very handy for keeping us alive over the eons.

A proper psychedelic breakthrough experience or even mindfulness done right and for long enough is capable of demonstrating this in clear, unambiguous, front-&-center terms. That is what is really going on: there is nothing particularly special about your brain or anyone else's when it comes to composition. They are deeply interconnected, specialized neural networks.

Tough pill to swallow, I know, but it has the advantage of being the truth. You're right: what use is there for us? We are about to get the most humbling wake up call in our entire history as a species, and it's coming faster than most realize.

We can be replaced.

2

dr_doug_exeter t1_je6w5wh wrote

Isn't that already how corporations under capitalism treat people? Humans are not seen as individuals by the system, but just as tools/resources to be used up and rotated out when they're no longer useful. Yes, it is dehumanizing, but place the blame where it belongs: we were dehumanized way before computers were even a thing.

1

Andriyo t1_je8qop2 wrote

oh yeah, the machines lack "the soul" :))

1