Submitted by Defiant_Swann t3_xskgzx in Futurology
ledow t1_iqm0iji wrote
Reply to comment by awfullotofocelots in AI will reach human intelligence, not imitate it by Defiant_Swann
And sadly, in practical and realistic terms, that's proven to be precisely wrong.
Inference is the key to intelligence. Knowing what's going to happen EVEN THOUGH you've never been in that situation before. Even if you have no information or prior scenarios to imitate.
When the kid runs towards the road at breakneck speed, you don't need to have watched 10,000 YouTube videos of the consequences of that; you know the potential consequences, because you can infer them from the situation. The child is unlikely to realise the danger, heed a warning, stop if you shout at them, etc. The traffic is unlikely to see the small child. There is likely to be a car when the child walks out. The driver is unlikely to be able to stop in time. You infer all that, even if you've never seen a child walk out in front of a car before.
And one of the biggest problems in AI? Inference isn't present. We can barely define it, let alone describe it, let alone do so well enough that an AI can utilise it, let alone have an AI come up with that on its own.
For decades people thought that just imitating would be enough, and that inference would somehow magically evolve from enough imitation. That has been so drastically wrong that it's held back AI for decades, providing nothing but dead ends and empty promises.
Almost all AI today is nothing more than a statistical model, built randomly, heuristically or otherwise, that basically operates on "well, 90% of the time this answer was right, so I'll go with it, even though these opening conditions have never been experienced before". And some people STILL cling to the hope that, out of that, the magical statistical computer will miraculously infer things about the data that it's never been able to, never been instructed to, and never had the capacity to even describe or facilitate.
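The "statistical model" being caricatured here can be sketched in a few lines: a learner that memorises which answer was most often correct for each condition it has seen, and for a never-seen condition just falls back to the global favourite. All names and data below are made up for illustration.

```python
from collections import Counter

# A caricature of the "90% of the time this answer was right" model:
# it memorises answer frequencies per condition and never infers anything.
class FrequencyModel:
    def __init__(self):
        self.by_condition = {}   # condition -> Counter of answers seen
        self.overall = Counter()

    def train(self, condition, answer):
        self.by_condition.setdefault(condition, Counter())[answer] += 1
        self.overall[answer] += 1

    def predict(self, condition):
        counts = self.by_condition.get(condition)
        if counts:
            # Seen this condition before: pick the historically best answer.
            return counts.most_common(1)[0][0]
        # Never-seen condition: no inference, just the global favourite.
        return self.overall.most_common(1)[0][0]

model = FrequencyModel()
for cond, ans in [("wet", "slip"), ("wet", "slip"), ("dry", "grip")]:
    model.train(cond, ans)

print(model.predict("wet"))  # "slip" - seen before, majority answer
print(model.predict("icy"))  # "slip" - unseen condition, blind fallback
```

The point of the sketch is the last line: on genuinely novel input the model doesn't reason about ice at all, it just replays whatever was most frequent.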
Sorry, but imitation is not the key to intelligence. It's key to social cues, in babies for instance, not intelligence. It's why we mirror our friendly companions when we talk, crossing our arms at the same time. It's a social signal we deliberately employ because we're intelligent, not a path to discovering intelligence.
The key is inference. The baby gets scared because dad wandered out of sight, and cries because dad has disappeared. But in time it learns to infer that dad is actually just behind the door, even though it can't see him, and even though it doesn't get shown that he's just around the door. And when it understands that, it crawls round the corner because it wants to see him.
Imitation plays no major role in intelligence, only in the con-artistry of "AI" - a sufficiently complicated gambling machine that is able to sometimes get an answer that any intelligent being could usually get right 100% of the time without having to be told how.
The first caveman that started a fire themselves demonstrated intelligence by inference. They were able to formulate a goal, plot a path to it, and infer the steps necessary in between. The second caveman that saw him and copied him wasn't intelligent. Not until he inferred that, actually, the wood wasn't dry enough and that's why it took so long, so tried with some drier, deader (inference! Nobody had told him that dead wood would be dry and work better) wood.
Inference is the major problem with AI and no AI demonstrates actual ability with it. And guesswork and statistics aren't inference. Inference is literally that spark of inspiration that makes the leap to intelligence, and is thus far unique to animal intelligence - even a crow can look at a complicated puzzle and infer what actions are necessary to make the seed it wants drop out.
It's why whenever you see AI inherently reliant on huge amounts of "training data", you know that it's just going to be yet-another-"AI". We'll fix it by retraining. We'll throw more training data at it. It'll train faster in the new machine. And so on.
The same shite that's been peddled since the 60's. None of that is intelligence.
LinkesAuge t1_iqn2nua wrote
I feel your whole comment is years behind the current state of AI research, especially in regards to the whole "inference" angle, which is itself a rather loosely defined argument to pick.
Your argument is also on shaky grounds because it would question the "intelligence" of many (average) humans, certainly in a pre-modern context.
It also doesn't answer the question of where this "spark" comes from. Does a 6-month-old baby do inference? A 2-year-old kid, a 6-year-old kid? When does this human "spark" begin? Your whole argument just shifts the problem of "intelligence" onto a new (arbitrary) term: inference.
What we see in AI research really doesn't suggest that intelligence, inference, or whatever other term you want to throw at the wall, is anything other than "mathematics" or "statistics".
It was only two decades ago that it was an open question whether AI would ever be able to properly write/translate arbitrary text. Nowadays that's considered an extremely low bar for AI, so low in fact that no one considers it an A(G)I-worthy challenge anymore, or a sign of "intelligence", and currently we are on the same path with "creativity"/art.
So the goal posts will keep moving. Now it's things like "inference", despite the fact that AI already covers that ground to some extent, and even with extremely limited training data, because it's certainly not true that AI today needs huge amounts of training data for "inference" (unsupervised training is a thing).
PS: I also think that you simply ignore the fact that in nature the "training data" is part of the DNA. There are systems/"code" built into every living being that are based on prior "experience", so it's always weird that this gets dismissed in such discussions (especially considering that evolution is literally a brute-force statistical approach to optimization). It also ignores the millions of "inputs" every organism experiences and uses to "train" its own "neural net".
So claiming that there is no intelligence in AI is somewhat akin to saying there is no intelligence in humans if we judged humanity by the intelligence of our infants.
UNODIR t1_iqofmoa wrote
Welcome to the idea that intelligence is a construct that never existed.
Those are ontological questions.
From my perspective AI cannot be achieved. It's way more fascinating to observe how people always fall for technology and think it will make their lives better. What does that say about the culture and the society within it? So much more interesting than "AI".
kaffefe t1_iqog97j wrote
You're describing inference as some magical thing with "no information", but inference is all about information.
Urc0mp t1_iqnrsvu wrote
There might be a special sauce missing, or inference may be related to the immense amount of 'training data' that molded different species into what they are.
momolamomo t1_iqodh02 wrote
Would it be accurate to call inference a form of intuition?
Bearman637 t1_iqqp4hq wrote
Inference is why I infer God exists. Inference doesn't spontaneously appear. It's literally God making us in His image.
awfullotofocelots t1_iqm3s14 wrote
Did you misread my comment? I said "a good portion of intelligence is imitation", not "imitation is the end-all be-all key to intelligence." You can't infer new stuff if you don't understand imitation first. Christ on a cracker, dude.
OuterLightness t1_iqm4mjy wrote
I could infer that this comment was coming.
ledow t1_iqmgk0v wrote
You said:
>Intuitively it seems like a good portion of human intelligence IS imitation, quite literally.
No it's not.
It's not for humans.
It's not for other animals.
It's not anywhere near a good portion for either, and forms almost no part of intelligence at all.
We don't have AI precisely because "intuition" like this is categorically incorrect and was posited in the 50's/60's etc. as the solution. "Just copy what the monkey does, and we'll all get smarter". It's wrong.
Imitation forms almost no part of intelligence whatsoever - small parts of social interaction, yes, but not intelligence.
>"You cant infer new stuff if you dont understand imitation first."
Yes. Yes you can. You absolutely can. In fact, that's exactly what you want in an AI. It's almost the definition of intelligence - to not just copy what other people did, but to find something new, or infer something that others never did from the same data that everyone else is looking at.
You're confusing human smart/successful traits, and learning styles, with actual intelligence. That's not what it is.
Literally, your one-line statement is why we don't have AI, and why all the "AI" we do have isn't actually intelligent. It's what the field believed for decades, and it provided excuses when it didn't work. Because it's just "imitating" its training-data responses (i.e. what you told it the answer should be), and as soon as there's nothing to imitate, it freaks out and chooses something random, entirely unreasonable and not useful, without you knowing that's what it's doing.
Imagine a teacher who zaps you when you get the answer wrong, and rewards you when you get it right, but only teaches in Swahili and only writes in Cyrillic, and where you have no clue what they're teaching you, why or how. They just ask, zap, show you the answer, move on to another topic. How much learning do you think gets done? Because that's how current AI is "taught" - that's where the repetition/imitation is for AI. Keep zapping him until he realises this is a third-order differential with a cosine and happens by chance to get the right answer. Then move on to the next question before he can recover from the surprise of not getting zapped, and repeat ad infinitum.
Even if they "imitate" the answer of the next guy, or that pattern of answers that gave them their least-zapped day, there's no intelligence occurring. Then after a decade of zapping them, you put them in a lecture hall and get them to demonstrate a solution to an equation they've never seen and have everyone just trust the answer.
AI like that - and that's most AI that exists - is just superstition, imitation and repetition. It's not intelligence, and it's why AI isn't intelligent.
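The zap/reward teacher above is essentially reward-only learning: the learner never sees an explanation, only a scalar signal, and drifts towards whatever happened not to get zapped. A minimal sketch (questions, answers and numbers all invented for illustration):

```python
import random

random.seed(0)

# Hidden correct answers the learner is never shown directly.
questions = {"q1": "A", "q2": "C", "q3": "B"}
prefs = {q: {"A": 0.0, "B": 0.0, "C": 0.0} for q in questions}

def answer(q):
    # Pick the currently most-preferred option, random tie-break.
    best = max(prefs[q].values())
    return random.choice([a for a, v in prefs[q].items() if v == best])

# Many rounds of ask / zap-or-reward / move on, with no explanation given.
for _ in range(200):
    q = random.choice(list(questions))
    a = answer(q)
    reward = 1.0 if a == questions[q] else -1.0   # reward, or "zap"
    prefs[q][a] += reward

# After enough zaps the learner reliably parrots the rewarded answers...
print(all(answer(q) == correct for q, correct in questions.items()))
# ...but for a question it was never asked, it has no preference at all.
```

Nothing in `prefs` encodes *why* an answer is right; the table only records which guesses were punished, which is the commenter's point about repetition without understanding.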
bpopbpo t1_iqnpf48 wrote
The way we measure an AI's fitness is its ability to label/create/whatever things that are not part of the training set.
This is why nobody understands what you are on about.
ledow t1_it32mcs wrote
A child that identifies yet-another banana, after having been trained to do only that, isn't intelligent.
A child who gets given a plantain and isn't fooled but also realises that it's NOT a banana, having never seen a plantain before, might be intelligent.
Inference and determinations of fitness on unknown data are not entirely unrelated but are not as closely correlated as you suggest.
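The banana/plantain example maps onto what's sometimes called open-set recognition: a classifier that can also refuse to label inputs far from anything it has seen. A toy sketch, with made-up features (length in cm, curvature) and a made-up rejection threshold:

```python
import math

# Nearest-centroid classifier over invented fruit features.
training = {
    "banana": [(18.0, 0.8), (20.0, 0.9), (17.0, 0.85)],
    "apple":  [(8.0, 0.1), (7.5, 0.05), (8.5, 0.12)],
}

# Mean feature vector per class.
centroids = {
    label: tuple(sum(v) / len(pts) for v in zip(*pts))
    for label, pts in training.items()
}

def classify(x, threshold=3.0):
    # Find the nearest class centroid and its distance.
    label, dist = min(
        ((lbl, math.dist(x, c)) for lbl, c in centroids.items()),
        key=lambda t: t[1],
    )
    # A closed-set model would always answer `label`; the threshold is the
    # crude "open-set" tweak that lets it say "this is NOT a banana".
    return label if dist <= threshold else "unknown"

print(classify((19.0, 0.85)))  # near the banana centroid -> "banana"
print(classify((30.0, 0.5)))   # a plantain-ish outlier   -> "unknown"
```

Whether rejecting outliers by distance counts as "inference" is exactly what the thread disputes; the sketch only shows that the child-and-plantain behaviour has at least a mechanical analogue.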