
phriot t1_je6xyk2 wrote

But once you learn why 2 + 2 = 4, it's going to be hard to convince you that the solution is really 5. Right now, LLMs have rote learning, and maybe some ability to do synthesis. They don't have the ability as of now to actually reason out an answer from first principles.

14

Good-AI t1_je71baq wrote

Rote learning can still get you there, because as you compress raw statistics and brute-force knowledge into smaller and smaller sizes, understanding has to emerge.

For example, an LLM can memorize that 1+1=2, 1+2=3, 1+3=4, ... out to infinity, then 2+1=3, 2+2=4, etc. But that's a lot of data. So if the neural network is forced to condense that data while keeping the same knowledge about the world, it starts to understand.

It realizes that by just understanding why 1+1=2, all possible combinations are covered. By understanding addition. That compresses all the infinite possible additions into one package of data. This is what is going to happen with LLMs, and what OpenAI's chief scientist said is already starting to happen. Source.
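A toy sketch of the compression point above (this is just an illustration, not how an LLM actually stores anything): memorizing every addition fact costs memory that grows with the number of facts, while the rule itself is one constant-size package.

```python
# Rote learning: one stored entry per fact. Covering all sums of
# numbers up to 100 already takes 10,000 memorized entries.
rote = {(a, b): a + b for a in range(1, 101) for b in range(1, 101)}

# "Understanding addition": one rule, constant size, covers every case,
# including pairs that were never memorized.
def rule(a, b):
    return a + b

assert rote[(2, 2)] == rule(2, 2) == 4
assert rule(1234, 5678) == 6912  # outside the memorized table
print(len(rote), "memorized facts vs. 1 rule")
```

Forcing the same knowledge into less space is exactly the pressure that favors `rule` over `rote`.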

11

Isomorphic_reasoning t1_je71vlt wrote

> Rote learning, and maybe some ability to do synthesis. They don't have the ability as of now to actually reason out an answer from first principles.

Sounds like 80% of people

9

BigMemeKing t1_je74m5d wrote

Not really. Why does 2+2=4? The first question I would ask is: what are we trying to solve for? I have 2 pennies, I get 2 more pennies, now I have 4 pennies. Now, we could add variables to this. One of the pennies has a big hole in it, making it invalid currency. So while yes, you do technically have 4 pennies, in our current dimension you only have 3, since one is, in all form and function, garbage.

Now, let's say one of those pennies has special attributes that make it worth more. While you may now have 4 pennies, one of them is worth 25 pennies. So while technically you only have four pennies, your net result in our current dimension is a total of 28 pennies. 2+2 only equals 4 in a one-dimensional space. The more dimensions you add to an equation, the more complicated the formula/format becomes.

−1

phriot t1_je779ga wrote

But if you feed an LLM enough input data where "5 apples" follows "Adding 2 apples to an existing two apples gets you...," it's pretty likely to tell you that if Johnny has two apples and Sally has two apples, together they have 5 apples. This is true even if it can also tell you all about counting and discrete math. That's the point here.
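The failure mode described here can be sketched with a stand-in for next-token statistics (the corpus counts below are made up for illustration): a model that predicts the most frequent continuation in its training data will faithfully repeat a majority error.

```python
from collections import Counter

# Hypothetical training data: most examples wrongly continue
# "two apples plus two apples = ..." with "5 apples".
corpus = ["5 apples"] * 90 + ["4 apples"] * 10

def predict(continuations):
    # Pick the most frequent continuation, like a frequency-based
    # next-token predictor would.
    return Counter(continuations).most_common(1)[0][0]

print(predict(corpus))  # -> "5 apples", despite 2 + 2 = 4
```

The prediction tracks the statistics of the input, not the arithmetic, which is the point being made about the LLM.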

2

Quentin__Tarantulino t1_je8l6u6 wrote

If you feed that information into a human brain enough times and from enough sources, they will absolutely believe it too. Humans believe all sorts of dumb things that are objectively false. I don’t think your argument refutes OP.

Once AI has other sensory inputs from the real world, its intelligence will basically be equal to that of biological creatures. The difference is that right now it can’t see, hear, or touch. Once it’s receiving and incorporating those inputs, as well as way more raw data than a human can process, it won’t just be intelligent, it will be orders of magnitude more intelligent than the smartest human in history.

2

Superschlenz t1_je84822 wrote

>Why does 2+2=4?

Because someone defined the digit symbols and their order to be 1 2 3 4 5. If they had defined đ 2 § Π Ø instead, then 2+2=Π.
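The relabeling is easy to make concrete: keep the arithmetic, swap the numerals. Using the symbol sequence from the comment above for the quantities one through five:

```python
# Same arithmetic, different numerals: đ 2 § Π Ø stand for 1 2 3 4 5.
symbols = {1: "đ", 2: "2", 3: "§", 4: "Π", 5: "Ø"}
values = {s: n for n, s in symbols.items()}  # inverse mapping

def add(x, y):
    # Add the underlying quantities, then rename the result.
    return symbols[values[x] + values[y]]

print(add("2", "2"))  # -> "Π": the quantity is unchanged, only its name
```

The quantity four is the same either way; only the glyph chosen to write it down differs.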

2