Submitted by Defiant_Swann t3_xskgzx in Futurology
ledow t1_iqmgk0v wrote
Reply to comment by awfullotofocelots in AI will reach human intelligence, not imitate it by Defiant_Swann
You said:
>Intuitively it seems like a good portion of human intelligence IS imitation, quite literally.
No, it's not.
It's not for humans.
It's not for other animals.
It's not anywhere near a good portion for either, and forms almost no part of intelligence at all.
We don't have AI precisely because "intuition" like this is categorically incorrect; it was posited in the '50s and '60s as the solution. "Just copy what the monkey does, and we'll all get smarter." It's wrong.
Imitation forms almost no part of intelligence whatsoever - small parts of social interaction, yes, but not intelligence.
>"You cant infer new stuff if you dont understand imitation first."
Yes. Yes you can. You absolutely can. In fact, that's exactly what you want in an AI. It's almost the definition of intelligence: not just copying what other people did, but finding something new, or inferring something that others never did from the same data everyone else is looking at.
You're confusing the traits that make humans smart or successful, and human learning styles, with actual intelligence. That's not what intelligence is.
Your one-line statement is literally why we don't have AI, and why all the "AI" we do have isn't actually intelligent. It's what the field believed for decades, and what the field made excuses for when it didn't work. Such a system is just "imitating" its training-data responses (i.e. what you told it the answer should be), and as soon as there's nothing to imitate, it freaks out and picks something random, entirely unreasonable and not useful, without you knowing that's what it's doing.
Imagine a teacher who zaps you when you get the answer wrong and rewards you when you get it right, but who only teaches in Swahili, only writes in Cyrillic, and gives you no clue what they're teaching you, why, or how. They just ask, zap, show you the answer, and move on to another topic. How much learning do you think gets done? Because that's how current AI is "taught"; that's where the repetition/imitation is for AI. Keep zapping him until he realises this is a third-order differential with a cosine and happens by chance to get the right answer. Then move on to the next question before he can recover from the surprise of not getting zapped, and repeat ad infinitum.
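To make the analogy concrete, here's a minimal sketch of reward-only learning in plain Python. The task, the answer table, and the update rule are all invented for illustration: the learner receives only a scalar "zap", never an explanation, and nothing it learns transfers to questions it has never seen.

```python
import random

# Hypothetical task: each question id has one right answer (0-9).
TRUE_ANSWERS = {q: random.randrange(10) for q in range(100)}

# The "student": a table of guesses, adjusted only by zaps.
policy = {q: random.randrange(10) for q in TRUE_ANSWERS}

for step in range(10_000):
    q = random.choice(list(TRUE_ANSWERS))
    guess = policy[q]
    if guess != TRUE_ANSWERS[q]:
        # The zap carries no explanation; the student just tries
        # something different next time.
        policy[q] = random.randrange(10)
    # When the zap doesn't come, the guess is kept - but it is
    # memorised, not understood.

# A question the student has never been asked:
unseen = max(TRUE_ANSWERS) + 1
print(policy.get(unseen, "no idea - a random guess at best"))
```

The point of the sketch: the rewarded answers get memorised per question, and the table has nothing at all to offer on a question outside its training.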
Even if they "imitate" the answer of the next guy, or the pattern of answers that gave them their least-zapped day, there's no intelligence occurring. Then, after a decade of zapping, you put them in a lecture hall, have them demonstrate a solution to an equation they've never seen, and get everyone to just trust the answer.
AI like that - and that's most AI that exists - is just superstition, imitation and repetition. It's not intelligence, and that's why AI isn't intelligent.
bpopbpo t1_iqnpf48 wrote
The way we measure an AI's fitness is by its ability to label/create/whatever things that are not part of the training set (i.e. a held-out set; see the sketch below).
This is why nobody understands what you are on about.
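For reference, here's a minimal sketch of that standard practice, using scikit-learn on the Iris dataset (both chosen purely for illustration): fitness is scored only on a split the model never trained on.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# The model never sees the test split during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy measured only on examples outside the training set.
print("held-out accuracy:", model.score(X_test, y_test))
```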
ledow t1_it32mcs wrote
A child that identifies yet another banana, after having been trained to do only that, isn't intelligent.
A child who is given a plantain, isn't fooled, and realises that it's NOT a banana, having never seen a plantain before, might be intelligent.
Inference and determinations of fitness on unknown data are not entirely unrelated, but they are not as closely correlated as you suggest. The toy sketch below shows the gap.
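Here's that toy sketch, with a one-dimensional "feature" and class names invented purely for illustration: a closed-set classifier fitted only on known classes must answer with one of them, however novel the input, so scoring well on held-out data from those classes says nothing about recognising a plantain.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: "banana" (class 0) vs "apple" (class 1),
# separated along one invented feature axis.
X_train = np.array([[1.0], [1.2], [0.9], [5.0], [5.3], [4.8]])
y_train = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X_train, y_train)

# A "plantain": close to banana in feature space, but not a banana.
plantain = np.array([[1.5]])

# The classifier has no "none of the above" option, so it answers
# "banana" with high confidence. Spotting the novelty would need an
# extra mechanism (e.g. a confidence threshold or a density model),
# which plain held-out fitness never tests for.
label = ["banana", "apple"][clf.predict(plantain)[0]]
print("predicted:", label)
print("confidence:", clf.predict_proba(plantain)[0].max())
```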