Submitted by sigul77 t3_10gxx47 in Futurology
adfjsdfjsdklfsd t1_j56lrry wrote
Reply to comment by Surur in How close are we to singularity? Data from MT says very close! by sigul77
I don't think an AI needs to "understand" anything to produce certain results
DoktoroKiu t1_j57ewz6 wrote
It has to have an understanding, but yeah, it doesn't necessarily imply that there's someone inside who knows anything about the human condition. It has no way to have a true internalization of anything other than how languages work and what words mean.
Maybe it is the same thing as a hypothetical man who is suspended in a sensory deprivation chamber and raised exclusively through the use of text, motivated to translate with addictive drugs as reward and pain as punishment.
You could have a perfect understanding of words, but no actual idea of how they map to external reality.
2109dobleston t1_j59mo3o wrote
The singularity requires sentience, sentience requires emotions, and emotions require the physiological.
tangSweat t1_j59zg1i wrote
At what point, though, do we say an AI is sentient if it can understand the patterns of human emotion and replicate them perfectly, has memories of its life experiences, forms "opinions" based on the information it deems most credible, and has a desire to learn and grow? We set a far lower bar for what is considered sentient in the animal kingdom. It's a genuine philosophical question many are talking about.
JorusC t1_j5d6l5w wrote
It reminds me of how people criticize AI art.
"All they do is sample other art, meld a bunch of pieces together into a new idea, and synthetize it as a new piece."
Okay. How is that any different from what we do?
2109dobleston t1_j5avx9t wrote
Sentience is the capacity to experience feelings and sensations.
tangSweat t1_j5deh0t wrote
I understand that, but feelings are just a construct of human consciousness, a byproduct of our brains trying to protect us from threats back in prehistoric times. If an AGI were using a black-box algorithm that we can't access or understand, then how do you differentiate between clusters of transistors or neurons firing in mysterious ways and producing different emotions? AIs like ChatGPT are trained with reward and punishment, and they are coded in a way that lets them improve themselves, which isn't really different from how we evolved, except at a much faster pace.
[deleted] t1_j5duat0 wrote
[deleted]
2109dobleston t1_j5duhep wrote
Feelings are a biological act.
DoktoroKiu t1_j5ai7o2 wrote
I would think an AI might only need sapience, though.
noonemustknowmysecre t1_j598qt0 wrote
I think people put "understanding" (along with consciousness, awareness, and sentience) up on a pedestal because it makes them feel special. It's just another example of egocentrism, like how we didn't think animals communicated, or were aware, or could count, or used tools, or engaged in recreation.
Think about all the philosophical waxing and poetical contemplation that's gone into asking what it means to be truly alive! ...And then remember that gut bacteria are most certainly alive, and all their drivel is more akin to asking how to enjoy the weekend.
Surur t1_j56m3cz wrote
But it has to understand everything to get perfect results.
EverythingGoodWas t1_j57zs70 wrote
No, it doesn't. We see this displayed all the time in computer vision. A YOLO model or any other CV model doesn't understand what a dog is; it just knows what dogs look like based on the billion images it has seen of them. If all of a sudden some new and different breed of dog appeared, people would understand it was a dog; a CV model would not.
PublicFurryAccount t1_j58ye2i wrote
This is a pretty common conflation, honestly.
I think people assume that, because computers struggled with it once, there's some deeper difficulty to language. There isn't. We've known since the 1950s that language has pretty low entropy. So it shouldn't surprise people that text prediction is actually really, really good, and that the real barriers are ingesting the data and traversing it efficiently.
ETA: arguing with people about this on Reddit does make me want to bring back my NPC Theory of AI. After all, it's possible that a Markov chain really does have a human-level understanding because the horrifying truth is that the people around you are mostly just text prediction algorithms with no real internal reality, too.
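To make that concrete, here's a toy word-level Markov chain in Python. The corpus here is just a stand-in, but even something this crude starts to exploit how predictable text is:

```python
# Toy word-level Markov chain: next-word prediction from bigram counts alone.
# The corpus below is a placeholder; point it at any text you like.
import random
from collections import defaultdict

corpus = "the dog chased the cat and the cat chased the mouse".split()

# Record which words follow which (duplicates preserve bigram frequency).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample proportionally to bigram counts
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat chased the mouse"
```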
JoshuaZ1 t1_j5bryem wrote
I agree with your central point, but I'm not so sure about this part:
> If all of a sudden some new and different breed of dog appeared, people would understand it was a dog; a CV model would not.
I'd be interested in testing this. It might be interesting to train a model on dog recognition on some very big dataset, deliberately leave one or two breeds out, and then see how well it does on them.
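Something like this is what I have in mind — a rough sketch assuming a torchvision-style setup, where the folder paths and the held-out breed names are just placeholders:

```python
# Sketch of the hold-out experiment: train a dog / not-dog classifier on most
# breeds, then check whether it still calls the unseen breeds "dog".
# Assumes hypothetical ImageFolder-style directories (breed folders plus a
# "not_dog" folder); paths and breed names are made up for illustration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader, Subset

HELD_OUT = {"xoloitzcuintli", "norwegian_lundehund"}  # breeds the model never sees

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

full = datasets.ImageFolder("data/train", transform=tfm)
held_idx = {full.class_to_idx[b] for b in HELD_OUT if b in full.class_to_idx}
train_idx = [i for i, (_, y) in enumerate(full.samples) if y not in held_idx]
train_loader = DataLoader(Subset(full, train_idx), batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = not dog, 1 = dog
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
not_dog = full.class_to_idx["not_dog"]

for epoch in range(3):
    for x, y in train_loader:
        labels = (y != not_dog).long()  # collapse all breed classes into "dog"
        opt.zero_grad()
        loss = loss_fn(model(x), labels)
        loss.backward()
        opt.step()

# Evaluation: feed images of the held-out breeds (kept in a separate folder)
# and count how often the model still says "dog".
held = datasets.ImageFolder("data/held_out_breeds", transform=tfm)
model.eval()
with torch.no_grad():
    preds = [model(x.unsqueeze(0)).argmax(1).item() for x, _ in held]
print("called 'dog' on", sum(p == 1 for p in preds), "of", len(preds), "held-out images")
```

If it generalizes to the unseen breeds, that would at least complicate the "it only knows what it has seen" claim.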
Surur t1_j598x1z wrote
You are kind of ignoring the premise: that to get perfect results, it needs to have a perfect understanding.
If the system failed as you described, it would not have a perfect understanding.
You know, like how you failed to understand the argument because you assumed it was the same old argument.
LeviathanGank t1_j56o1vt wrote
But it has to understand nothing to get preferred results.