Submitted by Rumianti6 t3_y0hs5u in singularity
Angry_Grandpa_ t1_irtx9bz wrote
I think that because we're conscious, we want to replicate consciousness. However, self-referential intention is probably not going to happen (and if it does, it's probably a long, long way down the road), and for most things we don't want it to happen.
I don't want the AI taking my order at Taco Bell to be self-aware and have a life goal. I just want it to get my order right 99.9999% of the time. And for most jobs that AI will displace, that will also be the case.
However, AI will likely be able to fool a lot of people into believing it's conscious based on extremely large datasets of answers, so for some it won't matter one way or the other. It will seem conscious -- even though it will not have an inner life or intentional goals.
I think a lot of introverted loners will enjoy and benefit from conversations with AI agents, even if they know deep down it's not a conscious entity. And it won't have the side effects of antidepressants.