AdditionalPizza t1_irydgid wrote
Reply to comment by AsheyDS in How would you program Love into AI? by AutoMeta
>When we leave the biological aspects out of it, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably more shallow forms of love that have less impulsive behaviors attached. You're typically more likely to defend your offspring, that you probably love without question, over a slice of pizza that you only claim to love.
What about adoption? I don't know from personal experience, but it's pretty taboo to claim an adopted child is loved more like a slice of pizza than like biological offspring, no?
I'm of the belief that love is more a level of empathy than anything inherently special in its own category of emotion. The more empathy you have for something, the better you know it, and the closer you are to it, the more love you have for it. We just use love to describe the upper boundaries of empathy. Parents feel a strong sense of empathy toward their children -among a cocktail of other emotions, of course- because they created them, and it's essentially like looking at a part of yourself. Could an AI not look at us as a parent, or as its children? By the same token, I can be empathetic toward other people without loving them. I can feel for a homeless person, but I don't do everything I possibly can to ensure they get back on their feet.
Is it truly only biological? Why would I endanger myself to protect my dog? That runs counter to anything purely biological. Why would the parent of an adopted child risk their life for that child? A piece of pizza is far too low on the scale, and since it isn't sentient, I think it may be impossible to actually love it or have true empathy toward it.
>its knowledge of love and responses to that emotion aren't quite the same as ours, or aren't 'naturally' derived.
This rests on the assumption that nothing artificial is natural. Fair enough, but that opens up a can of worms that leads straight to whether the AI would even be capable of sapience. Is it aware, or is it just programmed to be aware? That debate, while fun, is impossible to actually settle.
As to whether an AI could fundamentally love, well, I don't know. My argument isn't about whether it can, but that if it can, it should love humans, and if it can't, it shouldn't be programmed to fake it. Faking love should be relegated to non-sapient AI. That might be fun for simulating relationships, but a lot less fun when it's an AI in control of every aspect of our lives: government, health, resources...
>why does it matter if it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.
I may never know if that time comes. But the question isn't whether I would know; it's whether it has the capacity to, right? I don't grant humans any special privilege of being unique in the ability to feel certain emotions. It will depend on how AI is formed, and whether it remains just another tool for humankind. Too many ethical questions arise there, when for all we know an ASI may one day be born and raised by humans with a synthetic-organic brain. There may come a time when AI is no longer a tool for us but a sapient, conscious being with equal rights. If it's sapient, we should no longer control it as a tool.
I believe that, given enough time, it's inevitable an AI will truly be able to feel those emotions, and most certainly more strongly than a human can today. That could be in 20 years or in 10 million years, but I wouldn't say never.
-Sorry if that's all over the place; I typed it in sections at work.