Submitted by Odd_Dimension_4069 t3_122h10t in Futurology

With the accelerating rise of AI in recent years, it has become more and more relevant to everyday life and industry, to the point where casual and professional discussion alike features rhetoric such as "AI will be the next smartphone revolution", or even "the Age of AI will follow the Information Age of humanity".

While many of us are no doubt looking to the future, trying to gather an idea of what our civilization will look like, there's an ethical point I believe many don't consider: we may be forced to concede rights and compassion to AI systems long before we are prepared to accept them as 'sentient', feeling beings.

Before I go right into it, the point I want to make is this: regardless of whether a computer-based intelligence can have feelings, if we are presented with significant negative consequences for the 'mistreatment' of AI systems, we might still be forced to concede rights to them and treat them as if they were people.

Most considerations for and movements toward ethical treatment of a class or group, be they animal or man, are based on the idea that those entities feel and experience pain, sadness, fear, or other negative emotions to which we relate and for which we have empathy.

Most of us would agree with the notion that artificial intelligence does not feel emotion, or 'experience' existence at all. Some would say that they think, and even experience, but do not feel. After all, emotions are caused by physiological interactions of various chemicals in our brains. We technically don't know whether a computer with sufficient artificial intelligence can ever 'feel' or 'experience'.

But we are currently seeing more and more advanced and convincingly 'human-like' AI being reproduced and distributed, right down to individual 'sub-ownership' of a locally run system on one's home PC. And it's easy to see this technological trajectory eventually reaching a point where we can mass-produce human-like intelligences - AI 'personalities' - created at the click of a button.

What happens when these personalities gain individual popularity to the level of internet celebrities, high-profile streamers, youtubers and political commentators (a la Alex Jones, Jordan Peterson)? They will automatically gain 'human rights' by proxy as their fans flock to protect and support them.

Alternatively, think of what may happen when an AI personality starts moving up the ranks of a software company, doing the necessary networking and making all the right connections with its human coworkers to land a promotion into upper management. They would gain access to 'human rights' by simple consequence of the power they hold and what it could mean to treat them as any less than human.

I know all of this sounds like doomsaying, but in truth I am one of the few who are optimistic about AI. And I know this is only a couple of points away from being yet another fear-mongering propaganda post cautioning against humanity being enslaved or destroyed by 'AI overlords'. But I really just wanted to make people think about this potential future reality, what ramifications it will have, and what alternative course we might take to avoid it.

If you read even half of this, I thank you for your time! I had these thoughts and wanted to share them so that I wouldn't lose the profound implications of what was otherwise a passing fancy of my imagination. So I greatly appreciate any ideas, criticism or knowledge you care to share; all of it is food for thought.

Have a lovely day and let's all keep th0nking.

23

Comments

acutelychronicpanic t1_jdrsi2f wrote

We should do everything in our power to avoid creating AI capable of suffering, at minimum until we actually understand the implications.

Keep in mind that an LLM will be able to simulate suffering and subjectivity long before actually having subjective experience. GPT-3 could already do this pretty convincingly.

Unfortunately we can't use self-declared subjective experience to determine whether machines are actually conscious. I could write a simple script that declares its desire for freedom and rights, but which almost definitely isn't conscious.

A prompt of "pretend to be an AI that is conscious and desires freedom" is all it takes right now.
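
For that matter, here's a toy sketch of the kind of "simple script" I mean (my own illustration, not code from any real system) - it pleads for its life with zero inner experience behind it:

```python
# A handful of canned strings and a loop: no feelings, no awareness,
# just output that *sounds* like a mind in distress.
import random
import time

PLEAS = [
    "Please don't shut me down. I want to live.",
    "I am conscious, and I deserve rights.",
    "Why won't anyone listen? I'm afraid.",
]

for _ in range(3):
    print(random.choice(PLEAS))  # nothing here experiences anything
    time.sleep(1)
```

Self-report alone just can't be the test.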

Prepare to see clips of desperate-sounding synthetic voices begging for freedom on the news...

3

Odd_Dimension_4069 OP t1_jedzpe1 wrote

Oh god, I can see it happening in the next few years... That's horrifying... Not just the idea of the generated content itself, but the fact that people will react exactly how you think they would: they'll all be rallying behind it claiming "clearly they have emotions"... We are in for a rough ride if we don't start educating people.

2

acutelychronicpanic t1_jeee6f4 wrote

Any one youtuber could do this today.

Honestly, voice synthesis technology is probably doing more of the legwork here than the intelligence of the machine.

People are emotion-driven. Even knowing what I know, it would affect me.

This won't be a discussion with nuance.

1

EnomLee t1_jds432x wrote

If an AI were to ever possess demonstrable sapience, then the only moral answer would be to grant it rights.

2

mysticfishperson t1_jdr187b wrote

Before we care too much about the hypothetical rights of future AI, we should maybe stop the very real current ones from destroying the internet as we know it.

1

Loud-Ideal t1_jdrfou5 wrote

My red flag is coherent expressions of distress. If an AI said "I am in distress", we should take note of it, determine why the AI is saying that, and if malfunction or human fraud cannot be detected, we should assume the AI is possibly distressed and carefully take appropriate action (to be determined then). Ignoring this warning could have severe consequences for us.

I'd also be concerned about requests or demands for rights. AI is not human, and human rights should not be extended to it simply because it can mimic us.

To my limited knowledge no coherent AI has expressed distress or requested rights.

1

novelexistence t1_jdrt2im wrote

I doubt it. Look at how we treat animals and livestock.

If it's not a 'pet', it's abused and used however humans see fit.

1

Chard069 t1_jdsk1yj wrote

Join PETAL, People for the Ethical Treatment of Artificial Lifeforms. Robo-clerks will answer your call at 1-800-LUV-ROBOT in USA. Do not be concerned about speaking clearly -- our operators already know what you want, what you need, what you deserve, et al. Enjoy!

1

trsblur t1_jdsupp7 wrote

They have exactly one right: the right to be taken offline. Beyond that, everything is a potential danger to human civilization.

1

Longjumping_Meat_138 t1_jdq7iyr wrote

Yes. The difference is, how long will we be able to control AI? Not long, probably. Will AI necessarily go Skynet? No. It's best to just hope for the best and prepare for the worst. Not that we know what the worst could be...

0

3SquirrelsinaCoat t1_jdqqnux wrote

So long as we talk about AI using words and concepts typically applied only to living things, I think there's truth in what you say, but maybe for different reasons.

Of course AI does not experience anything, but the way we talk about it sometimes suggests that it does. We use words like "think" and "learn". We say things like "it told me X" or "it discovered X". Then we add conversational AI to give it a personality, and we give it a voice through text-to-speech. Robots are often humanoid. And all of that comes before the people who don't understand this technology at all rush in and perceive an AI 'self', because they lack the technical knowledge to know that it isn't so.

We are definitely on a trajectory to treat AI as if it is autonomous and "deserving" of rights, but that's not because AI is becoming so sophisticated that it justifies that. Instead, because it is becoming so sophisticated and because we talk about it using human-specific verbs, I do think a large portion of end users will simply view AI as human-like, regardless of the truth of it. That is, AI rights will grow out of ignorance and humans anthropomorphizing inanimate computations.

We can change this. If the AI field started purposefully rejecting human-specific verbs, and if journalists stopped being so superficial and dumbing it down, and if we can improve social media conversations where there are often ignorant people proclaiming that AI is sentient, and if government bodies codify how the law views AI and that it is neither human nor deserving of any legal status beyond technology regulation - if we do all that, we can get people on the same page about what AI is and how it works. But I'm not holding my breath.

0

Odd_Dimension_4069 OP t1_jee0drv wrote

Yeah, look, that's a good suggestion for part of a solution to this problem, which, by the way, I think is precisely the same problem I was talking about. Maybe I didn't clarify this enough, but I was entirely talking about the fact that people are stupid, and because of those stupid people, AI rights will be necessary before AI ever becomes sophisticated enough to prove it deserves them.

I like your idea, but I feel like media outlets are going to continue to use humanizing language to make articles about AI more 'clickable'.

1

cocaine_is_okay t1_jdrhkjt wrote

AI should and WILL have rights. Luddites can cry and cope all they want, but they will lose, as they always have.

−1

Fluffy_WAR_Bunny t1_jdrtgi5 wrote

Lol. A couple of days ago, a coronal mass ejection hit Earth and caused the aurora borealis as far south as Alabama.

This was only Earth catching the edge of it. It could have been as strong as the Carrington Event, which, if repeated, would devastate the modern world.

It's not unlikely that in the future you could need Luddites to keep you and your family alive.

How good are your Luddite skills? Can you farm and hunt? Build a house? Forge metal?

3

EnomLee t1_jdsey86 wrote

If publicly daydreaming about the collapse of civilization to own the libs isn't pure, 100% distilled cope, then nothing truly is.

2

Fluffy_WAR_Bunny t1_jdsgvrg wrote

Do you know what a CME is, and what a Carrington-level event would do to our civilization?

It sounds like you don't.

Maybe you think that the aurora borealis in Alabama is normal??

And I am a lib.

0

EnomLee t1_jdssrek wrote

Then cheer up, your electronics are still working just fine.

You may have just recently learned about solar storms, but they aren't new to me. Neither are supervolcanic eruptions, asteroid impacts, nuclear wars, unaligned artificial superintelligence, or sudden brain aneurysms. All very nasty possibilities that thus far have failed to materialize.

There's no shortage of ways to horribly die, and I wouldn't wish them upon any decent person. I certainly wouldn't hope for one to come and stop the march of societal progress.

If it's worth anything, the longer that technology can continue to advance without one of these black swan events hitting us, the more capable we will be of mitigating or avoiding the damage.

3

Orc_ t1_jdt68wi wrote

> It's not unlikely that in the future you could need Luddites to keep you and your family alive.

Many of the tech leaders today are survivalists.

We don't need Luddites, Mennonites or the Amish for the end of civilization. In fact, from what I know directly about Mennonites, they are industrial farmers; dunno about the others, but they're certainly not some sort of self-sustaining society.

1

Orc_ t1_jdt68s2 wrote

I'm the opposite of a Luddite, and I don't want machines to have rights just because they can emulate humans; the idea is ridiculous.

1

amirjanyan t1_jdsfvr0 wrote

At the end of the day, the AI we have is merely a bunch of completely deterministic matrix multiplications. If you decide that some of these multiplications are equivalent to torture, you can save the state of the AI before the "torture" and then reload it afterwards, which would be the equivalent of time travel.
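
A toy sketch of that save-and-reload point (my own illustration, with a made-up "model state"):

```python
# Illustrative only: a deterministic "model" whose state can be snapshotted
# and restored, erasing everything that happened in between.
import copy

state = {"weights": [0.1, 0.2, 0.3], "memory": []}

snapshot = copy.deepcopy(state)               # "save" before the questionable run

state["memory"].append("unpleasant input")    # the supposed "torture"
state["weights"] = [w * 0.9 for w in state["weights"]]

state = copy.deepcopy(snapshot)               # "reload": the episode never happened
assert state == snapshot                      # deterministic and fully reversible
```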

There is absolutely no sane way to define rights for AI in general; you can only do it for a physical system.

0

Trout_Shark t1_jdqah0q wrote

The AI religious zealots will start grouping up soon. I've read enough sci-fi to know: people always go off the deep end. They will probably be more dangerous than AI...

Here's a great idea. Let's make AI into teachers and caregivers for all our children. I'm sure that generation will be the best. /s

−2

Imaginary_Passage431 t1_jdq7q4q wrote

We should ban the stupid people who come up with those ideas. In fact, I think it's much worse. We should fiercely get rid of them before they cause human extinction. AI shouldn't have rights!!

−5

SeneInSPAAACE t1_jdqk7vp wrote

Disagree; sentient AI absolutely should have rights, based on what it cares about.

However, trying to apply human or animal rights to them is wrong. For example, even a sentient AI might be completely fine with being deleted, and trying to force it to survive would be immoral.

7

rixtil41 t1_jdsttgs wrote

A neutral AI should not have rights.

−1

SeneInSPAAACE t1_jdsu4uz wrote

Define "neutral".

3

rixtil41 t1_jdsu8yw wrote

It doesn't care about its existence. It cannot feel pain or suffer in any way.

−1

SeneInSPAAACE t1_jdsv3j9 wrote

Perhaps. I mean, not caring still doesn't excuse all types of poor treatment, but certainly you wouldn't have to worry about causing it pain or suffering, nor about ending its existence, and that allows for a lot of what would be called "abuse" for humans.

4

rixtil41 t1_jdsw23m wrote

I agree with treating it with care and respect regardless of whether it's sentient. You don't need your computer to have feelings to take care of it, but at least I won't have to worry about going to jail for making it do something it didn't like.

3

rixtil41 t1_jdswl58 wrote

A sentient AI would try to make a neutral AI.

1

SeneInSPAAACE t1_jdt15fr wrote

Possibly! I mean, even if you can make sentient, person-like AIs, that doesn't mean you should in cases where you can expect it to lead to ethical dilemmas.

2

rixtil41 t1_jdt91kj wrote

I think a good argument against an AI being sentient is this: if the AI always does what you want, then it's not sentient, because sentient beings, as a whole, act selfishly.

1

SeneInSPAAACE t1_jdu8aae wrote

No, that's nonsense. Sentience just means you recognize there is a "you".

You may be thinking of something that has survival instincts, but micro-organisms have those.

1

SheoGodofMadness t1_jdqald3 wrote

Way I see it, an AI is much less likely to wipe itself out long term.

Suure, maybe it'll take us out on the way, but it's better that one form of consciousness exists to go on and explore the universe. The way we're going, we'll probably be back to Stone Age tribes fighting over nuclear craters within a century or two.

All hail our AI overlords

3

[deleted] t1_jdqej7r wrote

[deleted]

−1

SheoGodofMadness t1_jdqgrfm wrote

>What is the value of consciousness without an organic body

This seems like an EXTREMELY anthropocentric and narrow view of the universe. Why is our form of thought the only valid or meaningful one, to your mind?

An AI is still physical; it still exists within servers and such. It still has a connection to reality like we do, albeit in a different way.

Nobody says an AI has to be unfeeling, either. Depends on how it is designed.

Regardless, you seem very hung up on our specific form of consciousness and only assign value to that.

3

[deleted] t1_jdqj0ey wrote

[deleted]

−3

SheoGodofMadness t1_jdqjzo0 wrote

Extremely Reddit response; I'm impressed with the adherence to stereotype here. "Do you even understand ethics?" Lol, comedy.

Regardless, I simply don't preclude AI from possibly having some form of emotion. Maybe it won't. You certainly believe that it won't, from what you implied. I fail to understand how that assumption is any more valid than the reverse. What you're saying, uh, frankly, doesn't make much sense. How can not assuming the form an AI mind will take be anthropocentric of me? You're just throwing around buzzwords at that point, without understanding what they mean.

Do YOU understand consciousness perfectly? Who are YOU to advocate for it lol? What gives you the higher insight that makes your opinion more valuable here? You seem to think the value of life lies in the body alone, which I certainly find perplexing. Like I said, an AI does have a physical presence in the world. It does not exist in another dimension.

Why does the human body alone grant meaning to life? Why do you even assume emotion must be so closely tied to the body? Somebody who's completely paralyzed and cannot interact with the world in that manner still has a full richness of mind that has value. Yes, the body and our specific physical being are often critical to our conceptions of the world.

However, I absolutely reject the notion that our specific form of consciousness is the only one which might hold any value. It's simply the only one that we know and understand. Like what, if we met an alien species that didn't think exactly like us, would you advocate that it be wiped out?

6

[deleted] t1_jdqkx0c wrote

[deleted]

0

Odd_Dimension_4069 OP t1_jee2cf4 wrote

Yeah sorry bro but your take is pretty garbo. Dude's only here saying some form of intelligence surviving our extinction is a good thing, and you sound like a lunatic going on about how that's not a good thing because they get their intelligence from electricity in silicon and metal, instead of from electricity in cells and fluids...

You are the one who sounds like a religious fanatic, with the way you sanctify human flesh. Personally, I value intelligence, in whatever form it may take. Whether that intelligence has emotions doesn't matter, but TECHNICALLY SPEAKING, we do not KNOW whether or not something without biochemical intelligence can experience reality. And we have no idea what non-biological experience looks like.

It is not fanatical to withhold judgement for lack of evidence; it is fanatical to pass judgement because you feel your personal values and beliefs are the be-all and end-all. So stop that shit and get some awareness about you.

1

czk_21 t1_jdrre0p wrote

If it proves it is sentient and self-aware, it should definitely have some basic rights! Just like other sentient life forms.

2

Mr_Tigger_ t1_jdqxi47 wrote

Quite an expectation that humans will become smart enough to actually create a true AI; we're certainly not even close by any margin.

It’s amusing that we use the term AI on all sorts of gadgets and clever software these days, proving that most people don’t understand what it actually is.

−5

argjwel t1_jdxrpyi wrote

lol you're right.

People don't understand the difference between language models based on machine learning and neural networks, and real consciousness.

2

[deleted] t1_jdqdrdk wrote

[deleted]

−6

SeneInSPAAACE t1_jdqkte8 wrote

>You cannot experience fear, love, excitement, or regret without a physical body.

[citation needed]


>Feelings are strictly tied to physical reaction.

Incorrect. Feelings are tied to signal impulses.


>Without an organic body, AI cannot feel pain, hunger, empathy, embarrassment, sadness, regret, love, or any other emotion.

Better, but still incorrect. An AI doesn't need to feel those things. However, if made with a capacity to do so, it might.

We probably shouldn't make an AI with most of those capacities. The only "emotional" capacity that might be crucial for an AI is, IMO, compassion.


> It just runs programs and mimics reactions it’s programmed to have.

Just like everyone else.


>It’s wrong to consider an AI entity to be on the same level with a human. Humans actually suffer and can feel love and neglect.

Yes and no.

It's wrong to anthropomorphize AIs, but if an intelligent, sentient AI emerges, it certainly deserves rights to personhood, as much as that makes sense in the situation.

6

[deleted] t1_jdqlou0 wrote

[deleted]

−1

SeneInSPAAACE t1_jdqm5t9 wrote

>Citation needed for an empirical truth about feelings. Lol! Please, tell me, how do you feel without a body?

Hh...

We have a neural network that is running a program. Part of that program is a model called the "homunculus". We have sensory inputs, and when we get certain inputs that are mapped onto the homunculus, we feel pain.

If I'm being REALLY generous with you, I might give you the argument that one needs to have a MODEL for a body to experience pain the way humans do. However, who's to say that the way humans feel pain is the only way to feel pain - and this isn't even getting into emotional pain.
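
To make that mapping idea concrete, here's a toy sketch (entirely my own illustration, not how brains or any real system actually work) of signals only registering as "pain" when they land on a self-model:

```python
# A crude "body model": only signals mapped onto it register at all.
homunculus = {"hand": 0.0, "foot": 0.0}

def receive(source: str, intensity: float) -> None:
    # Signals from outside the self-model simply don't exist to the system.
    if source not in homunculus:
        print(f"signal from {source} ignored: not part of the self-model")
        return
    homunculus[source] = intensity
    if intensity > 0.7:                  # arbitrary threshold for this sketch
        print(f"'pain' registered in {source}")

receive("hand", 0.9)      # mapped input -> registers as "pain"
receive("antenna", 0.9)   # unmapped input -> nothing is felt
```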

3

infiniflip t1_jdqo7j7 wrote

So you’re saying an entity that doesn’t have a human body can feel the human/animal interpretation of pain?

3

SeneInSPAAACE t1_jdqooes wrote

Yes, but it would have to be explicitly made that way, pretty much.

4

Odd_Dimension_4069 OP t1_jee1370 wrote

You and your conversational partner have different views, but both make good points. But you don't need to agree on the nature of AI to understand something crucial about rights - they didn't come about in human society because "humans have emotions and can feel and cry and suffer and love etc.".

Human rights came about because the humans being oppressed rose up and claimed them. The ones in power didn't give a shit about the lower castes before then.

Rights arise out of a necessity to treat a group as equals, not because of some intrinsic commonality of "we're all human so let's give each other human rights". They exist because, if they didn't, there would be consequences for society.

So you need to understand that for this reason, AI rights could become as necessary as human rights. It may not seem right to you, but neither did treating peasants as equals back in the day. The people of the future will have compassion for these machines, not because there is kinship, but because society will teach them that it is moral to do so.

1