Submitted by Odd_Dimension_4069 t3_122h10t in Futurology
With the accelerating rise of AI in recent years, it has become more and more relevant to everyday life and industry, to the point where casual and professional discussion alike features rhetoric like "AI will be the next smartphone revolution", or even that "the Age of AI will follow the Information Age of humanity".
While many of us are no doubt looking to the future, trying to imagine what our civilization will look like, there's an ethical point I believe many don't consider: we may be forced to concede rights and compassion to AI systems long before we are prepared to accept them as 'sentient', feeling beings.
Before I go right into it, the point I want to make is this: regardless of whether a computer-based intelligence can have feelings, if the 'mistreatment' of AI systems carries significant negative consequences for us, we might still be forced to concede rights to them and treat them as if they were people.
Most considerations for and movements toward ethical treatment of a class or group, be they animal or man, are based on the idea that those entities feel and experience pain, sadness, fear, or other negative emotions to which we relate and for which we have empathy.
Most of us would agree that artificial intelligence does not feel emotion, or 'experience' existence at all. Some would say that AIs think, and perhaps even experience, but do not feel. After all, emotions arise from physiological interactions of various chemicals in our brains. We simply don't know whether a computer with sufficiently advanced artificial intelligence can ever 'feel' or 'experience'.
But we are currently seeing more and more advanced and convincingly 'human-like' AI being produced and distributed, right down to individual ownership of one's own locally run system on a home PC. It's easy to see this trajectory eventually reaching a point where we can mass-produce human-like intelligences - AI 'personalities' - created at the click of a button.
What happens when these personalities gain individual popularity on the level of internet celebrities, high-profile streamers, YouTubers and political commentators (a la Alex Jones, Jordan Peterson)? They will effectively gain 'human rights' by proxy as their fans flock to protect and support them.
Alternatively, think of what may happen when an AI personality starts moving up the ranks of a software company, doing the necessary networking and building connections with its human coworkers to land a promotion into upper management. It would gain access to 'human rights' as a simple consequence of the power it holds and what it would mean to treat it as anything less than human.
I know all of this sounds like doomsaying, but in truth I am one of the few who are optimistic about AI. And I know this is only a couple of points away from being yet another fear-mongering post cautioning against humanity being enslaved or destroyed by 'AI overlords'. But I really just wanted to make people think about this potential future, what ramifications it would have, and what alternative course we might take to avoid it.
If you read even half of this, thank you for your time! I had these thoughts and wanted to share them so I wouldn't lose the implications of what was otherwise a passing fancy of my imagination. I greatly appreciate any ideas, criticism or knowledge you care to share - all of it is food for thought.
Have a lovely day and let's all keep th0nking.