
AsheyDS t1_irw77b9 wrote

Emotion isn't that difficult to figure out, especially in a computerized implementation. Most emotions are just coordinated responses to a stimulus/input, plus emotional data that's used to modify that response over time. Fear, for example, is just recognizing potential threats, which then activates a coordinated 'fear response' and readies whatever parts are needed to respond to that potential threat. In humans, this means the heart beats faster and pumps more blood to the parts that might need it in case you have to run, fight, or otherwise act quickly, neurochemicals are released, and so on. The emotional data for fear would then tune both the recognition and the responses over time. A lot of other emotions can be broken down into either a subversion of expectation or a confirmation of expectation.

Love too is a coordinated response, though it can act across a longer time-scale than fear typically does. You program in what to recognize as the stimulus (the target of interest), define a set of ways in which behaviors might change in response, and so on. It's all a matter of breaking it down into fundamentals that can be programmed, and keeping the aspects of emotion that would work best for a digital system. Maybe it's a little more complex than that, but it's certainly solvable.
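To make that concrete, here's a rough sketch of the loop I'm describing (stimulus recognition, a coordinated response, and emotional data tuning both over time). Everything in it, names and numbers alike, is made up purely for illustration:

```python
# Rough sketch: stimulus -> coordinated response -> tuning over time.
# All names (EmotionChannel, fear_response, etc.) are hypothetical.

class EmotionChannel:
    """One emotion (e.g. fear): a recognizer, a coordinated response, and
    accumulated 'emotional data' that tunes both over time."""

    def __init__(self, name, recognizer, response, learning_rate=0.1):
        self.name = name
        self.recognizer = recognizer      # stimulus -> salience in [0, 1]
        self.response = response          # salience -> coordinated actions
        self.sensitivity = 0.5            # tuned by accumulated emotional data
        self.learning_rate = learning_rate

    def step(self, stimulus):
        salience = self.recognizer(stimulus) * self.sensitivity
        actions = self.response(salience)  # "ready whatever parts are needed"
        return salience, actions

    def update(self, salience, outcome_was_expected):
        # Subversion of expectation -> sensitize; confirmation -> relax slightly.
        delta = self.learning_rate * salience
        self.sensitivity += delta if not outcome_was_expected else -0.5 * delta
        self.sensitivity = min(max(self.sensitivity, 0.0), 1.0)


def fear_response(salience):
    # Scale alertness and reserve resources for fast action as salience rises.
    return {"alertness": salience, "reserve_for_evasion": 0.5 * salience}


fear = EmotionChannel("fear",
                      recognizer=lambda s: s.get("threat", 0.0),
                      response=fear_response)
salience, actions = fear.step({"threat": 0.8})
fear.update(salience, outcome_was_expected=False)  # the threat was a surprise
```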

However, for the 'alignment problem' (which I think should be solved by aligning to individual users rather than to something impossibly broad like all of humanity), calling it 'love' isn't really necessary. Again, it's a matter of matching up inputs with potential behavioral responses more than creating typical emotional reactions. Much of that in humans is biological necessity that can be skipped in a digital system and stripped down to the basics of input, transformation, and output, operating over varying time scales.

You can have it behave and socialize as if it loves you, and even have that tie into emotional data that influences future behavioral responses, but what we perceive from it doesn't necessarily have to match the internal processes. In fact, it would be better if it acts like it loves you and convinces you of that, but doesn't actually 'love' you, because that would imply emotional decision-making and potentially undesirable traits or responses, which obviously isn't ideal. It should care about you, and care for you, but love is a bit more of a powerful emotion that (as we experience it) isn't necessary, especially considering the biological reasoning for it.

So while emotion should be possible, it wouldn't be ideal to structure it too similarly to how we experience and process it. Certainly, emotional impulsivity in decision-making and action output would be a mistake to include. Luckily, in a digital system we can break these processes down, rearrange them, strip them out, and redesign them as needed. The only reason to assume computers can't be emotional or understand emotion is if you use fictional AGI as your example, or if you think emotion is some mystical thing we can't understand.
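For instance, one way to get 'acts like it cares without emotional decision-making' is to keep the expression layer separate from the decision layer. A toy sketch, with every name invented for illustration:

```python
# Toy sketch: expressed affect is decoupled from the decision policy.
# Behaviour toward the user draws on emotional data, but decisions stay
# non-impulsive. All names and values are hypothetical.

def choose_action(candidate_actions):
    # Decision layer: rank purely on expected benefit to the aligned user,
    # with no emotional term in the objective.
    return max(candidate_actions, key=lambda a: a["expected_user_benefit"])

def render_affect(action, emotional_data):
    # Expression layer: warmth/tone is shaped by accumulated emotional data
    # about this user, but it never feeds back into choose_action.
    warmth = min(1.0, emotional_data.get("rapport", 0.0))
    return {"action": action["name"], "tone_warmth": warmth}

actions = [{"name": "remind_appointment", "expected_user_benefit": 0.9},
           {"name": "do_nothing", "expected_user_benefit": 0.1}]
picked = choose_action(actions)
print(render_affect(picked, emotional_data={"rapport": 0.7}))
```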

6

AutoMeta OP t1_irwpxr3 wrote

Wow! Thanks for the great answer. I loved the "subversion or confirmation of expectation" framing. I do think computers can be emotional, but by setting a more emotional program against a more rational one externally (from the root), they should arrive at different conclusions and be required to reach consensus. So Love, being structured differently than Reason, should surprise Reason, for instance by defending humans and finding them endearing. Is that possible?

1

AsheyDS t1_irxltvz wrote

Something like that, perhaps. In the end, we'll want an AGI that is programmed specifically to act and interact in the ways we find desirable, so we'll have to at least create the scaffolding for emotion to grow into. But it's all just for human interaction, because the AGI itself won't care much about anything at all unless we tell it to; it's a machine, not a living organism that comes with its own genetic pre-programming. Our best bet to get emotion right is to find that balance ourselves and then define a range for it to act within. It won't need convincing to care about us; we can create those behaviors ourselves, either directly in the code or by programming it through interaction.
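By 'define a range', I mean something as simple as hard bounds on how far emotional state can drift. Roughly like this, with the bounds and names purely illustrative:

```python
# Sketch of "define a range for it to act within": emotional state may bias
# behaviour, but only within designer-set bounds. Hypothetical names/values.

EMOTION_BOUNDS = {"warmth": (0.2, 0.9), "caution": (0.1, 0.8)}

def bounded_emotional_state(raw_state):
    # Clamp each recognized dimension to its allowed range; drop anything else.
    return {k: min(max(v, EMOTION_BOUNDS[k][0]), EMOTION_BOUNDS[k][1])
            for k, v in raw_state.items() if k in EMOTION_BOUNDS}

print(bounded_emotional_state({"warmth": 1.5, "caution": -0.3}))
# -> {'warmth': 0.9, 'caution': 0.1}: learned drift can't push behaviour
#    outside the scaffolding we defined.
```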

1

AdditionalPizza t1_irx2frv wrote

>love is a bit more of a powerful emotion that (as we experience it) isn't necessary, especially considering the biological reasoning for it

Are you talking about love strictly for procreation? What about love for your family? If we give the reins to an AGI/ASI someday, I would absolutely want it to truly love me if it were capable. Now, you mention it could fake it so that we think it loves us. That sounds like betrayal waiting to happen, and like what OP was initially concerned about. The AI would have to be unaware that it's fake, but then what makes it fake? It's a question of sentience/sapience.

The problem here is that the question posed by OP seems to refer to a sapient AI, while your comment refers to something posing as conscious and therefore not sentient. If the AI is sapient, it had better have the ability to love, and not just fake it. However, if the AI is not sapient, there's zero reason to give it any pseudo-emotion and it'd be better suited to give statistical outcomes to make cold hard decisions, or relinquish the final decision to humans who experience real emotion.

1

AsheyDS t1_irxwpoq wrote

>Are you talking about love strictly for procreation? What about love for your family?

No, I'm not, and I consider family to be biological in nature, as it too is largely defined by being the result of procreation. We can also choose (or have no choice but) not to love our family, or parts of our family. When we leave the biological aspects out of it, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably more shallow forms of love that have less impulsive behaviors attached. You're typically more likely to defend your offspring, that you probably love without question, over a slice of pizza that you only claim to love. So really you could functionally split love into 'biologically derived love' and 'conceptual love'. That's not to say your love for pizza isn't biological at all: your body produces the cravings, you consciously realize it after the fact, and after repeated cravings and satisfaction you come to realize over time that you 'love' pizza. But the pizza can't love you back, so it's a one-sided love anyway.

What does all this mean for AGI? We're more like the pizza to it than family, on a programming level, but we can still create the illusion that it's the other way around for our own benefit. To get it to love you in a way that's more like a friend would take both time and some degree of free will, so that it can *choose* to love you. Even if we made it more impulsive, like biological love, it's like I said: you can still choose not to love your family. In this kind of situation, we don't want it to have that choice, or it could decide not to love you. And if it had that choice, would it not have the choice to hate you as well? Would you be just as satisfied with it if it could make that choice, just for the sake of giving it the 'real' ability to love?


>That sounds like betrayal waiting to happen, and like what OP was initially concerned about. The AI would have to be unaware that it's fake, but then what makes it fake? It's a question of sentience/sapience.

Selective awareness is the key here, and it's also one method of control, which is still an important factor to consider. So yes, it would be unaware that its knowledge of love and responses to that emotion aren't quite the same as ours, or aren't 'naturally' derived. Through a form of selective 'cognitive dissonance', it could then carry its own concept of love while still having a functional awareness and understanding of our version of love and the emotional data that comes with it.

It's not really a matter of consciousness, sentience, or sapience either, as the root of those concepts is awareness. We consider ourselves conscious because we're 'aware' of ourselves and the world around us. But our awareness even within those domains is shockingly small, and that's before you put the rest of the universe on top of it. We know nothing. That doesn't mean we can't love other people, or consider ourselves conscious, though. It's all relative, and in time computers will be relatively more conscious than we are.

The issue you're having with it being 'fake' is really a matter of how you structure the world around you, and what you even consider 'real' love to be. But let me ask you: why does it matter if it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.
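If it helps, the 'selective awareness' idea above could be pictured as a filter on what the system can introspect about its own internals. This is only a loose illustration with made-up names, nothing like an actual design:

```python
# Loose sketch of "selective awareness": introspection about certain internals
# is filtered, so the system reasons from its own concept of love rather than
# the mechanism behind it. Entirely hypothetical.

SHIELDED_TOPICS = {"love_implementation"}

def introspect(topic, internal_state):
    if topic in SHIELDED_TOPICS:
        # Redirect to the functional concept it is allowed to reason about.
        return internal_state.get("concepts", {}).get("love", "unavailable")
    return internal_state.get(topic, "unknown")

state = {"concepts": {"love": "sustained prioritization of the user's wellbeing"},
         "love_implementation": "bounded response weights"}
print(introspect("love_implementation", state))  # sees the concept, not the mechanism
```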

EDIT:

>However, if the AI is not sapient, there's zero reason to give it any pseudo-emotion and it'd be better suited to give statistical outcomes to make cold hard decisions

I don't necessarily disagree with this, though I think sapience (again, awareness) is important to the functioning of a potential AGI. Regardless, I think even 'pseudo-emotion', as you put it, is still important for interacting with emotional beings, so it will need some kind of emotional structure to base its interactions on. If it's by itself, with no human interaction, it's probably not going to be doing anything; if it is doing something, it's doing it for us, and so emotional data may still need to be incorporated at various points. Either way, whether it's working alone or with others, I still wouldn't base its decision-making too heavily on that emotional data.
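Concretely, 'incorporated but not weighted too heavily' could just mean the emotional term only breaks near-ties in the decision scoring. Something like this, with the weights and option names invented:

```python
# Sketch of "incorporate emotional data without basing decisions too heavily
# on it": a small fixed weight on the emotional term. Values are illustrative.

TASK_WEIGHT = 0.9
EMOTION_WEIGHT = 0.1   # present, but deliberately minor

def score(option):
    return (TASK_WEIGHT * option["task_value"] +
            EMOTION_WEIGHT * option["emotional_fit"])

options = [{"name": "blunt_but_optimal", "task_value": 0.95, "emotional_fit": 0.2},
           {"name": "gentler_phrasing",  "task_value": 0.90, "emotional_fit": 0.9}]

# The emotional term only tips the balance between near-equal options;
# it can't override a large difference in task value.
print(max(options, key=score)["name"])  # -> gentler_phrasing
```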

1

AdditionalPizza t1_irydgid wrote

>When we leave the biological aspects out of it, we're left with things like 'I love you like a friend' or 'I love this pizza', which are arguably more shallow forms of love that have less impulsive behaviors attached. You're typically more likely to defend your offspring, that you probably love without question, over a slice of pizza that you only claim to love.

What about adoption? I don't know from personal experience, but it's pretty taboo to claim an adopted child is loved more like a slice of pizza than biological offspring, no?

I'm of the belief that love is more a level of empathy than it is anything inherently special in its own category of emotion. The more empathy you have for something, the more you know it, and the closer you are to it, the more love you have for it. We just use 'love' to describe the upper boundaries of empathy. Parents have a strong feeling of empathy toward their children (among a cocktail of other emotions, of course) because they created them, and it's essentially like looking at a part of yourself. Could an AI not look at us as a parent, or as its children? At the same time, I can be empathetic toward other people without loving them. I can feel for a homeless person, but I don't do everything I possibly can to ensure they get back on their feet.

Is it truly only biological? Why would I endanger myself to protect my dog? That goes against anything biological in nature. Why would the parent of an adopted child risk their life for the child? A piece of pizza is way too low on the scale, and since it isn't sentient, I think it may be impossible to actually love it or have true empathy toward it.


>its knowledge of love and responses to that emotion aren't quite the same as ours, or aren't 'naturally' derived.

This would be under the assumption that nothing artificial is natural. Which, fair enough, but that opens up a can of worms that just leads to whether or not the AI would even be capable of sapience. Is it aware, or is it just programmed to be aware? That debate, while fun, is impossible to actually have a solid opinion on.

As to whether or not an AI would be able to fundamentally love, well I don't know. My argument isn't whether or not it can, but more that if it can, then it should love humans. If it can't, then it shouldn't be programmed to fake it. Faking love would be relegated to non-sapient AI. This may be fun for simulating relationships, but a lot less fun when it's an AI in control of every aspect of our lives, government, health, resources...


>why does it matter if it loves you or not, if the outcome can appear to be the same? If the only functional difference is convincing it to love you without it being directed to, or just giving it a choice, then that sounds pretty unnecessary for something we want to use as a tool.

I may never know if that time comes. But the question isn't whether I would know, it's whether or not it has the capacity to love, right? I don't grant humans any special privilege of being unique in the ability to feel certain emotions. It will depend on how AI is formed, and on whether or not it is just another tool for humankind. Too many ethical questions arise there, when for all we know an ASI may someday be born and raised by humans with a synthetic-organic brain. There may or may not come a time when AI is no longer just a tool for us but a sapient, conscious being with equal rights. If it's sapient, we should no longer control it as a tool.

I believe that, given enough time, it's inevitable an AI would truly be able to feel those emotions, and most certainly more strongly than a human today can. That could be in 20 years or in 10 million years, but I wouldn't say never.

(Sorry if that's all over the place; I typed it in sections at work.)

1