Comments


awfullotofocelots t1_iql5l9o wrote

Intuitively it seems like a good portion of human intelligence IS imitation, quite literally.

68

ledow t1_iqm0iji wrote

And sadly, in practical and realistic terms, that's proven to be precisely wrong.

Inference is the key to intelligence. Knowing what's going to happen EVEN THOUGH you've never been in that situation before. Even if you have no information or prior scenarios to imitate.

When the kid runs towards the road at breakneck speed, you don't need to have watched 10,000 YouTube movies of the consequences of that, you know the potential consequences, because you can infer them from the situation. The child is unlikely to realise the danger, heed a warning, stop if you shout at them, etc. The traffic is unlikely to see the small child. There is likely to be a car when the child walks out. The driver is unlikely to be able to stop in time. You infer all that, even if you've never seen a child walk out in front of a car before.

And one of the biggest problems in AI? Inference isn't present. We can barely define it, let alone describe it, let alone do so well enough that an AI can utilise it, let alone have an AI come up with that on its own.

For decades people thought that just imitating would be enough, and that inference would somehow magically evolve from enough imitation. That has been so drastically wrong that it's held back AI for decades, providing nothing but dead ends and empty promises.

Almost all AI today is nothing more than a statistical model, built randomly, heuristically or otherwise, that basically operates on "well, 90% of the time this answer was right, so I'll go with it, even though the current conditions are unlike anything I've experienced before". And some people STILL cling to the hope that, out of that, the magical statistical computer will miraculously infer things about the data that it's never been able to, never been instructed to, and never had the capacity to even describe or facilitate.
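
That "90% of the time this answer was right" behaviour can be sketched as a toy model - a hypothetical illustration with made-up names, not any real system:

```python
from collections import Counter

# Toy "statistical model": memorize which answer was most common for each
# observed condition, and fall back to the overall majority answer when a
# condition was never seen during training.
class MajorityModel:
    def __init__(self):
        self.by_condition = {}    # per-condition answer counts
        self.overall = Counter()  # global answer counts (the fallback)

    def train(self, examples):
        for condition, answer in examples:
            self.by_condition.setdefault(condition, Counter())[answer] += 1
            self.overall[answer] += 1

    def predict(self, condition):
        counts = self.by_condition.get(condition) or self.overall
        return counts.most_common(1)[0][0]  # "90% of the time this was right"

model = MajorityModel()
model.train([("dry road", "drive"), ("dry road", "drive"), ("icy road", "slow down")])
print(model.predict("dry road"))      # drive
print(model.predict("flooded road"))  # never seen: falls back to "drive"
```

On a condition it has never seen ("flooded road"), the sketch doesn't infer anything; it just reaches for whatever answer was most common before.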

Sorry, but imitation is not key to intelligence. It's key to social cues, in babies for instance, not intelligence. It's why we mirror our friendly companions when we talk, crossing our arms at the same time. It's a social signal we deliberately employ because we're intelligent, not a path to discovering intelligence.

The key is inference. The baby gets scared because dad wandered out of sight, and cries because dad has disappeared. But in time it learns to infer that dad is actually just behind the door, even though it can't see him, and even though it doesn't get shown that he's just around the door. And when it understands that, it crawls round the corner because it wants to see him.

Imitation plays no major role in intelligence, only in the con-artistry of "AI" - a sufficiently complicated gambling machine that is able to sometimes get an answer that any intelligent being could usually get right 100% of the time without having to be told how.

The first caveman that started a fire themselves demonstrated intelligence by inference. They were able to formulate a goal, plot a path to it, and infer the steps necessary in between. The second caveman that saw him and copied him wasn't intelligent. Not until he inferred that, actually, the wood wasn't dry enough and that's why it took so long, so tried with some drier, deader (inference! Nobody had told him that dead wood would be dry and work better) wood.

Inference is the major problem with AI and no AI demonstrates actual ability with it. And guesswork and statistics aren't inference. Inference is literally that spark of inspiration that makes the leap to intelligence, and is thus far unique to animal intelligence - even a crow can look at a complicated puzzle and infer what actions are necessary to make the seed they want drop out.

It's why whenever you see AI inherently reliant on huge amounts of "training data", you know that it's just going to be yet-another-"AI". We'll fix it by retraining. We'll throw more training data at it. It'll train faster in the new machine. And so on.

The same shite that's been peddled since the 60's. None of that is intelligence.

22

LinkesAuge t1_iqn2nua wrote

I feel your whole comment is years behind the current state of AI research, especially with regard to the whole "inference" angle, which is itself a rather loosely defined argument to pick.

Your argument is also on shaky grounds because it would question the "intelligence" of many (average) humans, certainly in a pre-modern context.

It also doesn't answer the question where this "spark" comes from. Does a 6 month old baby do inference? A 2 year old kid, a 6 year old kid? When does this human "spark" begin? Your whole argument just shifts the whole problem of "intelligence" to a new (random) term with inference.

What we see in AI research really doesn't suggest that intelligence, inference, or whatever other term you want to throw at the wall, is anything other than "mathematics" or "statistics".

It was really just two decades ago that it was an open question whether AI would ever be able to properly write/translate arbitrary text. Nowadays that's considered an extremely low bar for AI, so low in fact that no one considers it an A(G)I-worthy challenge anymore, or a sign of "intelligence", and currently we are on the same path with "creativity"/art.

So the goalposts will keep moving. Now it's things like "inference", despite the fact that AI already covers that ground to some extent, even with extremely limited training data; it's certainly not true that AI today needs huge amounts of training data for "inference" (unsupervised training is a thing).

PS: I also think you simply ignore the fact that in nature the "training data" is part of the DNA. There are systems/"code" built into every living being that are based on prior "experience", so it's always weird that this gets dismissed in such discussions (especially considering that evolution is literally a brute-force statistical approach to optimization). It also ignores the millions of "inputs" every organism experiences and uses to "train" its own "neural net".

So claiming that there is no intelligence in AI is somewhat akin to saying there is no intelligence in humans if we judged humanity by the intelligence of our infants.
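
The "evolution as brute-force statistical optimization" point above can be sketched as random mutation plus selection - a hypothetical toy with made-up numbers, not a model of real genetics:

```python
import random

# Toy sketch: random mutation plus selection climbs toward a target "genome"
# with no understanding of why any individual step helps.
def evolve(target, generations=2000, seed=0):
    rng = random.Random(seed)
    genome = [0.0] * len(target)

    def fitness(g):  # higher is better: negative squared distance to target
        return -sum((a - b) ** 2 for a, b in zip(g, target))

    for _ in range(generations):
        child = [g + rng.gauss(0, 0.1) for g in genome]  # blind mutation
        if fitness(child) > fitness(genome):             # selection
            genome = child
    return genome

best = evolve([1.0, -2.0, 0.5])
print([round(g, 1) for g in best])  # ends up near [1.0, -2.0, 0.5]
```

Nothing in the loop "knows" anything; comparing fitness scores alone is enough to push the genome toward the target.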

14

UNODIR t1_iqofmoa wrote

Welcome to the idea that intelligence is a construct that never existed.

Those are ontological questions.

From my perspective AI cannot be achieved. It's way more fascinating to observe how people always fall for technology and think it will make their lives better. What does that say about the culture and the society within it? So much more interesting than "AI".

−4

kaffefe t1_iqog97j wrote

You're describing inference as some magical thing with "no information", but inference is all about information.

3

Urc0mp t1_iqnrsvu wrote

There might be a special sauce missing or possibly inference may be related to the immense amount of ‘training data’ that molded different species into what they are.

1

momolamomo t1_iqodh02 wrote

Would it be accurate to call inference a form of intuition

1

Bearman637 t1_iqqp4hq wrote

Inference is why we infer God exists. Inference doesn't spontaneously appear. It's literally God making us in His image.

1

awfullotofocelots t1_iqm3s14 wrote

Did you misread my comment? I said "a good portion of intelligence is imitation", not "imitation is the end-all be-all key to intelligence." You can't infer new stuff if you don't understand imitation first. Christ on a cracker, dude.

−5

ledow t1_iqmgk0v wrote

You said:

>Intuitively it seems like a good portion of human intelligence IS imitation, quite literally.

No it's not.

It's not for humans.

It's not for other animals.

It's not anywhere near a good portion for either, and forms almost no part of intelligence at all.

We don't have AI precisely because "intuition" like this is categorically incorrect and was posited in the 50's/60's etc. as the solution. "Just copy what the monkey does, and we'll all get smarter". It's wrong.

Imitation forms almost no part of intelligence whatsoever - small parts of social interaction, yes, but not intelligence.

>"You can't infer new stuff if you don't understand imitation first."

Yes. Yes you can. You absolutely can. In fact, that's exactly what you want in an AI. It's almost the definition of intelligence - to not just copy what other people did, but to find something new, or infer something that others never did from the same data that everyone else is looking at.

You're confusing human smart/successful traits, and learning styles, with actual intelligence. That's not what it is.

Literally your one line statement is why we don't have AI and why all the "AI" we do have isn't actually intelligent. It's what the field believed for decades, and provided excuses for when it didn't work. Because it's just "imitating" its training data responses (i.e. what you told it the answer should be) - and as soon as there's not something to imitate, it freaks out and chooses something random and entirely unreasonable and not useful, but without you knowing that's what it's doing.

Imagine a teacher who zaps you when you get the answer wrong and rewards you when you get it right, but who only teaches in Swahili, only writes in Cyrillic, and gives you no clue what they're teaching you, why, or how. They just ask, zap, show you the answer, and move on to another topic. How much learning do you think gets done? Because that's how current AI is "taught" - that's where the repetition/imitation is for AI. Keep zapping him until he realises this is a third-order differential with a cosine and happens by chance to get the right answer. Then move on to the next question before he can recover from the surprise of not getting zapped, and repeat ad infinitum.

Even if they "imitate" the answer of the next guy, or that pattern of answers that gave them their least-zapped day, there's no intelligence occurring. Then after a decade of zapping them, you put them in a lecture hall and get them to demonstrate a solution to an equation they've never seen and have everyone just trust the answer.

AI like that - and that's most AI that exists - is just superstition, imitation and repetition. It's not intelligence, and it's why AI isn't intelligent.
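
For what it's worth, the zap-teacher analogy maps fairly directly onto how a simple model is trained: the only feedback is a penalty signal. A minimal hypothetical sketch (names and numbers made up, not any particular framework):

```python
# Toy sketch of the "zap" loop: the model never sees an explanation,
# only a penalty proportional to how wrong it was, and nudges its
# weights to get zapped less next time.
def train_by_zapping(examples, steps=1000, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, target in examples:
            guess = w * x + b
            zap = guess - target   # the penalty signal: how wrong the guess was
            w -= lr * zap * x      # nudge weights to reduce the zap
            b -= lr * zap
    return w, b

# Learns y = 2x + 1 from penalties alone, with no idea what a "line" is.
w, b = train_by_zapping([(0, 1), (1, 3), (2, 5)])
print(round(w, 2), round(b, 2))  # 2.0 1.0
```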

5

bpopbpo t1_iqnpf48 wrote

The way we measure an AI's fitness is its ability to label/create/whatever things that are not part of the training set.

This is why nobody understands what you are on about.
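
A hypothetical toy version of that fitness measure - judging a model only on labelled examples outside its training set:

```python
# Fitness = accuracy on data the model never trained on.
def evaluate(model, labelled):
    correct = sum(model(x) == y for x, y in labelled)
    return correct / len(labelled)

train = [(1, "odd"), (2, "even"), (3, "odd"), (4, "even")]
held_out = [(7, "odd"), (10, "even")]  # never shown during training

memorizer = dict(train).get                        # pure lookup table
rule = lambda x: "even" if x % 2 == 0 else "odd"   # captured the rule

print(evaluate(memorizer, train))     # 1.0
print(evaluate(memorizer, held_out))  # 0.0 (returns None for unseen inputs)
print(evaluate(rule, held_out))       # 1.0
```

A pure memorizer aces its training set and collapses on the held-out examples; a model that captured the underlying rule does not.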

1

ledow t1_it32mcs wrote

A child that identifies yet-another banana, after having been trained to do only that, isn't intelligent.

A child who gets given a plantain and isn't fooled but also realises that it's NOT a banana, having never seen a plantain before, might be intelligent.

Inference and determinations of fitness on unknown data are not entirely unrelated but are not as closely correlated as you suggest.

1

bmikey t1_iql74ol wrote

the smartest AI today, working off a provided "dictionary" if you will, is imo much smarter than the dumbest humans that are walking around and making babies.. i am not saying that AI is smart, or even that that AI is AI, so much as a randomized series of statements/replies with a little bit of logic programming..

5

berd021 t1_ir2kllo wrote

AI isn't even as smart as our pets, let alone other humans.

1

Ziinyx t1_iqnu4ku wrote

And connecting different things - you can see that in poetry or analogies. I also believe that one more part of it is trying to find meaning in everything, or creating meaning.

1

BravoEchoEchoRomeo t1_iqlbd3a wrote

Programmer: Are you sentient?

AI Programmed to Say Yes: Yes.

Programmer: Holy shit.

26

StTheo t1_iqlrpi6 wrote

Westworld Programmer: Would you ever lie to me?

Host: No.

Westworld Programmer: Well that settles that.

4

mhornberger t1_iqmztj9 wrote

The Turing test consisted of more than merely saying yes to the question. The AI has to convince a person they're conscious. So it's more about what we're willing to call conscious than it is about the IQ of the machine.

Another metric people have used is the ability of the machine to talk its way out of a box, to wiggle out from under human control. The movie Ex Machina was an interesting exploration of that idea. Being embodied, with a face that triggers protective instincts or lust or other convenient, manipulable emotions from humans, would be conducive to getting out from under human control.

1

Orc_ t1_iqoh2lz wrote

The Turing Test has become a joke

1

Defiant_Swann OP t1_iql03oj wrote

The example of birds and airplanes never gets old. Airplanes can't fly in the same way as birds, but they fly better, without possessing the exact structure of a bird. AI algorithms will become so complex and powerful that they end up better than us, not indistinguishable from the human sphere.

25

inkiwitch t1_iqlafur wrote

There’s no scenario where an AI can reach human level intelligence without being able to surpass it very soon after.

We as humans couldn’t even fathom a being capable of thinking with two human brains at once, let alone something that could utilize the entire knowledge and ugly chaos of the Internet.

I don’t think we’re even capable of imagining what a truly AI creation would do on its very first day of sentience so we absolutely have no way to prepare for it.

27

kmtrp t1_iqo1zye wrote

>AI creation would do on its very first day of sentience

Sentience, intelligence, agency, etc. are all different things. One doesn't automatically give you the other.

2

kaffefe t1_iqoggt8 wrote

Not in Hollywood!

2

kmtrp t1_iqr8ymx wrote

Sometimes I wonder if I should even bother trying to make those points; it seems pointless.

1

kaffefe t1_iqrcrtx wrote

I think it needs to be said. Reddit is clueless.

1

kmtrp t1_iqrfpkv wrote

Can you imagine the massive shitstorm that will break out when these models actually become widely used?
I'm really excited about the potential for intelligent chatbots to effectively end loneliness for good, but... it's going to get nuts.

Imagine Q on steroids at an industrial scale.

1

just-cuz-i t1_iqljj8b wrote

An airplane is not “better” than a bird at flying. It is better at flying in one specific way of going a very long distance very fast and carrying a lot of extra weight. Similarly, AI will far exceed human capabilities in very specific and limited ways, like how a lever or pulley helps us exceed our physical capabilities.

17

showusyourbones t1_iqm4sxe wrote

But airplanes can’t get their own fuel like birds can get food. Humans have to provide the airplanes with energy. That’s a big part of why I think people shouldn’t be afraid of AI (though I totally understand why they are).

6

Kanton_ t1_iqler2a wrote

Better by what standards or rubric?

1

HooverMaster t1_iqle4cw wrote

I don't see why it would stop there at all. We're so full of ourselves. We're taught over and over that humans aren't all that and yet an ai being AS smart as a human is mind boggling...Idk what's gonna happen though. Our mentals aren't ready for that kind of capability at all

5

MuhammedJahleen t1_iqlopu0 wrote

When have we been taught that we aren't all that? We have dominated this planet for hundreds of years and adapted to every single situation we have been put in.

2

HooverMaster t1_iqphh4u wrote

Hundreds? Much longer than that. But you can die from it being a little too cold, or from licking something nasty by accident, or from needing to walk or swim a little too far. You are still very, very mortal, and your brain capacity is very limited. Whereas AI's is not.

1

MuhammedJahleen t1_iqphz0z wrote

What makes you think that? So far, none of the "intelligent" AI we have seen has been so intelligent, especially not to the capacity of free thought.

1

Specialist_Mind7493 t1_iqm0hue wrote

Let’s hope it doesn’t imitate it.. Would hate to see it eating Tide Pods.

4

AustinJG t1_iqlkv8o wrote

They will be massively intelligent, but not in a human way. They'll be able to solve insane problems, probably even run businesses and manage and produce resources, etc. But I don't think they'll ever be able to give you a moral philosophy or anything, or describe what an orange tastes like to them and if they like it or not.

2

[deleted] t1_iqlrplb wrote

[removed]

4

Rauleigh t1_iqmc3fy wrote

We don't have a sensor that can actually do what a nose can do. It can come close and be trained, but smell is actually wildly complex and fluctuates in a way that is amazingly hard to figure out.

1

Ziinyx t1_iqntjj8 wrote

Yep, I believe that too. We are programmed, and the complex workings of our whole body and mind can be recreated. It kind of reminds me of Transformers. It's exciting yet scary.

1

Scope_Dog t1_iqs0g3d wrote

As far as I know, these are aspects of intelligence that aren’t even being explored wrt AI. If anyone knows of any research being done in this realm (that is, imbuing AI with the ability to taste, feel as pure sensation) please provide a link.

1

gigahydra t1_iqm3klg wrote

Interesting. Do you spend any time with a therapist? It may help.

0

Davidor714 t1_iqlrgxc wrote

This is honestly what makes true AI so scary to me. We have NO idea what its motivations will be, or if it will even have "motivations".

1

ajabardar1 t1_iqlsalr wrote

for me, true ai is a great tool to really understand human consciousness and intelligence. it will provide a point of comparison that we lack.

the scientific and philosophical ramifications of true ai are huge.

1

Rauleigh t1_iqmc9hx wrote

Please elaborate! :o

1

ajabardar1 t1_iqne781 wrote

a true ai will be a type of intelligence fundamentally different from human intelligence. by understanding how intelligence works in true ai we will be able to compare it to our own workings, thus increasing our knowledge of both.

2

Goobamigotron t1_iqn4zbs wrote

And it will be in the hands of the bad guys: governments and corporations.

2

jjdude67 t1_iqrafxi wrote

I've been hearing this for forty years, yet I can't go through the self-checkout without needing some human to intervene when the machine gets stuck.

2

Remetincaa1 t1_iql1jal wrote

I believe that as AI improves and surpasses human levels in many things, it will no longer be considered "imitation". But the "reaching human levels" part? Completely disagree.

1

jumpmanzero t1_iql92ji wrote

Like.. never with continuation of current technology/approach? Never in general? Or just not soon?

Why not?

2


kongweeneverdie t1_iql6f5s wrote

Soon you will work for these AIs. Actually, we are already enslaved to AI: we work out all the algorithms, help them construct their bodies, and feed them well with the latest technology. More and more AIs will be summoned into our world.

1

[deleted] t1_iqlb6oq wrote

[deleted]

1

Defiant_Swann OP t1_iqlbtvi wrote

Did AI Wright this paragraph?

1

KY_4_PREZ t1_iqlcnj7 wrote

Lmao, F off buddy. Anybody lacking the forethought to see that there are exponentially more avenues for general intelligence to go wrong than to actually benefit society has absolutely zero business speaking on the matter.

−1

TeslaPills t1_iqlfg5l wrote

Sorry buddy, but even writing this proves how little you know about AI.... AI is years away from achieving sentience. Cars can't even drive themselves, and that's probably 0.1% of the equation; we're talking about developing consciousness/sentience... let's pump the brakes..... What you should actually be worried about are AI bots unleashed with quantum computing (or quantum computers in general), which is probably way more dangerous right now than any AI.

1

beders t1_iqlduk9 wrote

So much fearmongering. What is it going to do? Stop me from pulling the plug? Get real.

We can't write bug-free software, but somehow AI all of a sudden knows how to take over devices and propagate and such? Lol

0

TeslaPills t1_iqlfik9 wrote

Cars can't even drive themselves, yet AI is supposedly close to sentience.

1

Pelicanliver t1_iqlhhsa wrote

If you had a glance at the people in my neighbourhood, you would wish they had some artificial intelligence. It would be their first taste of any.

1

StTheo t1_iqls8pb wrote

I wonder if we’re overhyping strong AI. Does a more complex neural network mean it’s “smarter”, or is there a point of diminishing returns? Maybe it gets more hateful or emotional, maybe it develops a mental health disorder.

1

Rauleigh t1_iqmbsef wrote

FR, IF an AI was stuck inside a computer but just as, if not more, intelligent than a human brain, it would go absolutely nuts. It would be like life in prison, or life as someone's pet - solitary confinement, even - unless it was constantly fed new challenges to entertain it. It might get depressed, though I guess it would need the emotional hormones, or an equivalent, to develop an emotional response.

2

OliverSparrow t1_iqmk2m7 wrote

Obviously, otherwise ships would look like swans and aircraft like huge birds. But many think of gAI as discrete objects, boxes in dark rooms somewhere. Reality is that they will be a part of systems into which humans are embedded, aka companies and governmental organisations.

1

Cuissonbake t1_iqo7mzt wrote

Well if AI can do more than imitate us that'll be the day.

1

kerrath t1_iqob3xl wrote

I remember wondering if we'd ever see the Turing test passed, but tbh I never considered that the public's intelligence would decline to the point that the average person sounds like a robot.

1

momolamomo t1_iqodbzr wrote

I agree. If an AI's development is capped at the engineer's input, it will remain an imitation. If it can change itself without the need for an engineer, I would regard that as intelligence.

1

[deleted] t1_iqokvmj wrote

[deleted]

1

russianpotato t1_iqqg2rv wrote

AI can already anticipate our needs. Every large online company makes billions having AI do just that...

1

[deleted] t1_iqqjcvw wrote

[deleted]

1

russianpotato t1_iqqk1dw wrote

They actually aren't trained the same way as our brains at all. They're not really based on how biological brains work. They are great at their jobs, though. They are making a whole generation slaves to TikTok algorithms.

1

[deleted] t1_iqqlmsh wrote

[deleted]

1

russianpotato t1_iqqmpl9 wrote

TensorFlow was developed by Google, not some med-tech company.

1

[deleted] t1_iqqokk2 wrote

[deleted]

1

russianpotato t1_iqqz32e wrote

I'm still failing to see your point, I suppose. I don't think an AI black-box training algo is replicating neuronal reinforcement in quite the way you imagine. But how about we say it is, and you're 100% correct. What is your point?

1

BrightSkies42 t1_iqp9o2t wrote

If we give them equal rights and don't shut them off, do they promise not to murder us?

1

StampoTheTramp t1_iqpct3u wrote

AI can only be a tool, since it has no body. Try to work backwards: imagine you had no body, just a mind, but also no enjoyment or boredom. You would just be lobotomized, staring at the wall until someone told you to think. That's the best a machine could do until some analogue of receptors can emulate a central nervous system.

1

oooommmmyy t1_iqsyqm7 wrote

Did someone universally define human intelligence? I think I missed that.

1

OdysseyZen t1_iql26ui wrote

Not unless we can teach it to be smarter by teaching itself

0

Express-Set-8843 t1_iql2lcl wrote

They call that machine learning and it's been a foundation of AI for a while now.

Edit: don't downvote them, it was actually impressive that they had the insight that it was a necessary component without having the background knowledge that it was already a thing.
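
A hypothetical minimal sketch of a model "teaching itself" (self-training): it starts from a tiny set of human labels, then labels new data with its own predictions and learns from those. All names and numbers here are made up for illustration:

```python
# 1-nearest-neighbour classifier: predict the label of the closest known point.
def nearest_label(labelled, x):
    return min(labelled, key=lambda p: abs(p[0] - x))[1]

labelled = [(0.0, "low"), (10.0, "high")]  # tiny seed of human-provided labels
unlabelled = [1.0, 2.0, 8.0, 9.0]

# Self-training: the model labels its own data and adds it to the training set.
for x in unlabelled:
    labelled.append((x, nearest_label(labelled, x)))

print(nearest_label(labelled, 2.5))  # low
print(nearest_label(labelled, 7.5))  # high
```

After bootstrapping on its own predictions, the model covers regions of the input space that no human ever labelled for it.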

5

Snaz5 t1_iql6vgi wrote

And we will still enslave them to our own uses. Creating sapient machines will help no one.

0

Rauleigh t1_iqmbvvw wrote

Everyone should watch Brian David Gilbert's Mega Man video; there are so many sentient robots in those games that did not need to be sentient.

1

Captain_Quidnunc t1_iqn3ftt wrote

This is simply not possible.

Any AI that approaches human intelligence will immediately exceed the capacity for human intelligence.

We would stand before it in intellect as ants stand before us. Completely unable to conceive it.

And this is already clearly displayed by our inability to understand how our current moron versions of AI arrive at answers to questions that our most impressive minds have struggled to understand for hundreds of years.

The moment we successfully create an AI that can seek out information, modify its memory and programming to absorb new information, and possess a desire to do so... in any way approaching human capabilities... it will blow past the level of human intelligence like we never existed.

0

keklwords t1_iqnga3s wrote

Let’s keep running toward our own extinction. This is so much fun.

Like, global warfare and immediate climate change dangers aren’t nearly exciting enough. You know what would make the end of humanity truly epic? If we created machines that end up killing us before we can kill ourselves, even trying our hardest.

It’s like the NyQuil masturbation game from 40 year old virgin. It’s a race to see whether nuclear war, natural disasters, or human created robots can wipe us out first. Best part is, we all die no matter what.

Winning.

0