Submitted by AutoMeta t3_y14cs5 in singularity
AutoMeta OP t1_irwpxr3 wrote
Reply to comment by AsheyDS in How would you program Love into AI? by AutoMeta
Wow! Thanks for the great answer. I loved the "subversion or confirmation of expectation". I do think computers can be emotional, but by opposing a more emotional program externally (from the root) to a more rational one, they should arrive at different conclusions and be required to reach consensus. So Love, being structured differently than Reason, should surprise Reason, for instance by defending humans and finding them endearing. Is that possible?
AsheyDS t1_irxltvz wrote
Something like that, perhaps. In the end, we'll want an AGI that is programmed specifically to act and interact in the ways we find desirable. So we'll have to at least create the scaffolding for emotion to grow into. But it's all just for human interaction, because it won't care much about anything at all unless we tell it to, since it's a machine and not a living organism that comes with its own genetic pre-programming. Our best bet to get emotion right is to find that balance ourselves and then define a range for it to act within. So it won't need convincing to care about us; we can create those behaviors ourselves, either directly in the code or by programming through interaction.