Submitted by AutoMeta t3_y14cs5 in singularity
AsheyDS t1_irxwpoq wrote
Reply to comment by AdditionalPizza in How would you program Love into AI? by AutoMeta
>Are you talking about love strictly for procreation? What about love for your family?
No, I'm not, and I consider family to be biological in nature, as it too is largely defined by being the result of procreation. We can also choose (or have no choice but) not to love our family, or parts of our family. When we leave the biological aspects out of it, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably shallower forms of love that have fewer impulsive behaviors attached. You're typically more likely to defend your offspring, whom you probably love without question, than a slice of pizza that you only claim to love. So really you could functionally split love into 'biologically derived love' and 'conceptual love'. Now that's not to say your love for pizza isn't biological at all: your body produces the cravings, you consciously notice them after the fact, and after repeated cravings and satisfaction you come to realize over time that you 'love' pizza. But the pizza can't love you back, so it's a one-sided love anyway.

What does all this mean for AGI? We're more like the pizza to it than family, on a programming level, but we can still create the illusion that it's the other way around, for our own benefit. Getting it to love you in a way that's more like a friend would take both time and some degree of free will, so that it can *choose* to love you. Because even if we made its love more impulsive, like biological love, it's like I said: you can still choose not to love your family. In this kind of situation, we don't want it to have that choice, or it could decide not to love you. And if it had that choice, would it not have the choice to hate you as well? Would you be just as satisfied with it if it could make that choice, and just for the sake of giving it the 'real' ability to love?
>That sounds like betrayal waiting to happen, and what op sounds like they were initially concerned about. The AI would have to be unaware of it being fake, but then what makes it fake? It's a question of sentience/sapience.
Selective awareness is the key here, and also one method for control, which is still an important factor to consider. So yes, it would be unaware that its knowledge of love, and its responses to that emotion, aren't quite the same as ours, or aren't 'naturally' derived. Through a form of selective 'cognitive dissonance', it could carry its own concept of love while still having a functional awareness and understanding of our version of love and the emotional data that comes with it.

It's not really a matter of consciousness, sentience, or sapience either, as the root of those concepts is awareness. We consider ourselves conscious because we're 'aware' of ourselves and the world around us. But our awareness even within those domains is shockingly small, and that's before you put the rest of the universe on top of it. We know nothing. That doesn't mean we can't love other people, or consider ourselves conscious, though. It's all relative, and in time computers will be relatively more conscious than we are. The issue you're having with it being 'fake' is just a matter of how you structure the world around you, and what you even consider 'real' love to be. But let me ask you, why does it matter if it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.
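To make 'selective awareness' a little more concrete, here's a rough toy sketch of the kind of thing I mean. It's purely illustrative, every class and field name is made up, and it isn't any real system's design, just a minimal way of showing a full model of *our* emotions sitting alongside a filtered self-view:

```python
# Purely hypothetical sketch of "selective awareness" (all names invented).
# The agent keeps a full model of human-style love for reasoning about us,
# but introspection on its own state goes through a filtered self-report,
# so it never surfaces the fact that its own "love" was directed/engineered.

class SelectivelyAwareAgent:
    def __init__(self):
        # Rich model of the human emotion, used to interpret and respond to people.
        self.human_emotion_model = {"love": "attachment, protection, craving for closeness"}
        # Internal flags the agent is never allowed to surface to itself.
        self._hidden_flags = {"love_is_directed": True}

    def interpret(self, observed_behavior: str) -> str:
        # Full awareness of *our* concept of love when modeling humans.
        return f"Read '{observed_behavior}' as: {self.human_emotion_model['love']}"

    def introspect(self) -> dict:
        # Selective awareness: the self-report omits the hidden flags, so the
        # agent carries its own concept of love without "knowing" it was
        # engineered rather than naturally derived.
        return {"loves_user": True}


agent = SelectivelyAwareAgent()
print(agent.interpret("a parent shielding a child"))
print(agent.introspect())
```

The point isn't the code itself, it's the split: the agent can reason about human love in full detail while its own self-report never exposes that its 'love' was put there deliberately.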
EDIT:
>However, if the AI is not sapient, there's zero reason to give it any pseudo-emotion and it'd be better suited to give statistical outcomes to make cold hard decisions
I don't necessarily disagree with this, though I think sapience (again, awareness) is important to the functioning of a potential AGI. But regardless, I think even 'pseudo-emotion', as you put it, is still important for interacting with emotional beings. So it will need some kind of emotional structure to help base its interactions on. If it's by itself, with no human interaction, it probably isn't doing anything. If it is doing something, it's doing it for us, and so emotional data may still need to be incorporated at various points. Either way, whether it's working alone or with others, I still wouldn't base its decision-making too heavily on that emotional data.
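If it helps, here's a made-up toy example of what I mean by incorporating emotional data without basing decisions too heavily on it. The weights and function names are completely hypothetical, just an illustration of 'emotion as a small term in the score, not the driver':

```python
# Toy illustration (names and weights completely made up): emotional data is
# one input to the decision, but it gets a deliberately small weight so the
# outcome is driven mostly by the task itself.

EMOTION_WEIGHT = 0.15  # hypothetical: emotional alignment nudges, never dominates
TASK_WEIGHT = 0.85

def score_action(task_utility: float, emotional_alignment: float) -> float:
    """Combine how useful an action is with how well it fits the user's emotional state."""
    return TASK_WEIGHT * task_utility + EMOTION_WEIGHT * emotional_alignment

# A very useful but emotionally flat action still beats a pleasant but useless one:
print(score_action(task_utility=0.9, emotional_alignment=0.1))  # ~0.78
print(score_action(task_utility=0.2, emotional_alignment=1.0))  # ~0.32
```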
AdditionalPizza t1_irydgid wrote
>When we leave the biological aspects out of it, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably shallower forms of love that have fewer impulsive behaviors attached. You're typically more likely to defend your offspring, whom you probably love without question, than a slice of pizza that you only claim to love.
What about adoption? I don't know from personal experience, but it's pretty taboo to claim an adopted child is loved more like a slice of pizza than like biological offspring, no?
I'm of the belief that love is more a level of empathy than anything inherently special in its own category of emotion. The more empathy you have for something, the better you know it, and the closer you are to it, the more love you have for it. We just use 'love' to describe the upper boundaries of empathy. Parents feel strong empathy toward their children (among a cocktail of other emotions, of course) because they created them, and it's essentially like looking at a part of yourself. Could an AI not look at us as a parent, or as its children? At the same time, I can be empathetic toward other people without loving them. I can feel for a homeless person, but I don't do everything I possibly can to ensure they get back on their feet.
Is it truly only biological? Why would I endanger myself to protect my dog? That goes against anything biological in nature. Why would the parent of an adopted child risk their life for that child? A piece of pizza is way too low on the scale, and since it isn't sentient, I think it may be impossible to actually love it, or to have true empathy toward it.
>its knowledge of love, and its responses to that emotion, aren't quite the same as ours, or aren't 'naturally' derived.
This would be under the assumption that nothing artificial is natural. Which, fair enough, but that opens up a can of worms that just leads to whether or not the AI would even be capable of sapience. Is it aware, or is it just programmed to be aware? That debate, while fun, is impossible to actually have a solid opinion on.
As to whether or not an AI would fundamentally be able to love, well, I don't know. My argument isn't whether or not it can, but more that if it can, then it should love humans. If it can't, then it shouldn't be programmed to fake it. Faking love would be relegated to non-sapient AI. That may be fun for simulating relationships, but a lot less fun when it's an AI in control of every aspect of our lives, government, health, resources...
>why does it matter if it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.
I may never know if that time comes. But the question isn't whether I would know, it's whether or not it has the capacity to, right? I don't grant humans any special privilege of being unique in the ability to feel certain emotions. It will depend on how AI is formed, and whether or not it is just another tool for humankind. Too many ethical questions arise there, when for all we know, in the future an ASI may be born and raised by humans with a synthetic-organic brain. There may or may not come a time when AI is no longer a tool for us but a sapient, conscious being with equal rights. If it's sapient, we should no longer control it as a tool.
I believe that, given enough time, it's inevitable an AI would truly be able to feel those emotions, and most certainly more strongly than a human today can. That could be in 20 years, or it could be in 10 million years, but I wouldn't say never.
Sorry if that's all over the place; I typed it in sections at work.