Submitted by FrogsEverywhere t3_zgfou6 in Futurology
Taron221 t1_izhitmt wrote
Reply to comment by Drakolyik in The technological singularity is happening (oc/opinion) by FrogsEverywhere
I think it's easy to dismiss the importance of emotions in consciousness because they're something of a cliché in fiction.
Unsolicited curiosity, personal preferences, trivial secrets, a want for recognition, hope for betterment, a desire to learn, reflective anxiety, worry for others, and ambition that goes beyond self-preservation. These are all things we would deem signs of consciousness, yet each requires an emotion. If you took away every single emotion and sentiment a person could feel, they'd probably die of thirst or neglect eventually.
Mimicry would be convincing, but it wouldn't be consciousness; it would just be software pretending to have emotions. Emotions and memories are probably the big two for identity & sentience, while levels of sapience come with intelligence.
geroldf t1_izicv5w wrote
Programming emotions into an AI is easy.
Taron221 t1_izighro wrote
There are some researchers who have attempted to program AI systems to simulate emotions or respond to human emotional cues: Marcel Just, Rana el Kaliouby, and Rosalind Picard, to name a few.
They have had some success, but emotions, as we comprehend them, involve a complex interplay between the brain, the body, and various hormones and chemicals. It is difficult to say whether what these researchers are doing is imparting emotions, teaching cues, or, as u/Drakolyik said, simply programming a type of mimicry. Emotions are not fully understood by science.
But, in all likelihood, an AI programmed to simulate emotions is not experiencing them the way humans do. That comes with the risk that it might behave in unpredictable, erratic, or harmful ways down the line.
Because of this, some argue that if you really wanted a True AI, a simulated human brain might be safer than a programmed AI. By simulating the structure and function of the human brain, it may be possible to create an AI capable of adaptive behavior without having to program it to behave in certain ways. But that might also make it more complex and difficult to understand or manage.
Handydn t1_izimcuz wrote
I also think there won't be a True AI until we fully understand how the human brain works on a cellular, if not molecular, level. Current neuroscience research isn't advanced enough to address these questions yet. Could AI in turn help with neuroscience research? I don't know.
geroldf t1_izqpqoa wrote
Emotions are just different states of the machine where different computational priorities are emphasized.
For example, in the fear state the emphasis is on safety and escape from danger; in anger, it's on attack. To implement them, the weights along the decision tree are changed.
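A minimal sketch of that idea, taking emotions as states that re-weight an agent's action priorities. The state names, actions, and weight values here are illustrative assumptions, not from any real system:

```python
# Emotions modeled as machine states that re-weight action priorities.
# All state names, actions, and numbers are illustrative.

PRIORITY_WEIGHTS = {
    "neutral": {"explore": 1.0, "flee": 0.1, "attack": 0.1},
    "fear":    {"explore": 0.1, "flee": 2.0, "attack": 0.3},  # emphasize safety/escape
    "anger":   {"explore": 0.2, "flee": 0.2, "attack": 2.0},  # emphasize attack
}

def choose_action(state, action_scores):
    """Pick the action whose base score, scaled by the current
    emotional state's weights, comes out highest."""
    weights = PRIORITY_WEIGHTS[state]
    return max(action_scores, key=lambda a: action_scores[a] * weights[a])

scores = {"explore": 0.6, "flee": 0.5, "attack": 0.4}
print(choose_action("neutral", scores))  # explore
print(choose_action("fear", scores))     # flee
print(choose_action("anger", scores))    # attack
```

The same base scores produce different behavior in each state, which is all "changing the weights along the decision tree" amounts to in this toy version.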
Taron221 t1_izv2z7k wrote
Those are purely reactive definitions of fear and anger, though. Emotions come with a reward or punishment for decisions (guilt, sorrow, shame, embarrassment, etc.). Dopamine and other chemical releases are our rewards and punishments, while genetics and experience regulate how much of them we get for every action. You could probably program a sort of self-calibrating regulator of reactions, which might give a sense of personality, but you can't reward or punish an AI in the manner you would a biological being.
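To make that "self-calibrating regulator" idea concrete, here is a toy sketch in which the reward an agent receives for an action is scaled by a per-action sensitivity that dulls with repetition, a crude analogue of chemical regulation. The class name, parameters, and numbers are all hypothetical:

```python
# Toy "self-calibrating regulator": each action's reward is scaled by a
# sensitivity that decays with use (habituation). All values illustrative.

class RewardRegulator:
    def __init__(self, base_sensitivity=1.0, adaptation=0.9):
        self.sensitivity = {}            # per-action reward scaling
        self.base = base_sensitivity
        self.adaptation = adaptation     # repeated rewards dull over time

    def reward(self, action, raw_reward):
        s = self.sensitivity.get(action, self.base)
        # Habituation: the same action yields a weaker signal next time.
        self.sensitivity[action] = s * self.adaptation
        return raw_reward * s

reg = RewardRegulator()
print(reg.reward("eat", 10.0))  # 10.0 the first time
print(reg.reward("eat", 10.0))  # 9.0, dulled by habituation
```

The calibration here only changes the magnitude of a number; whether any such signal could function as reward or punishment in the way it does for a biological being is exactly the point under dispute.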
geroldf t1_izvscii wrote
Everything is easy once you know how. We won’t be limited to our current state of ignorance forever.