
Tobislu t1_jdxtun0 wrote

I dunno; I think that the people who believe that tend to have a background in computing, and expect it to be a super-complex Chinese Room situation.

Whether or not the assertion is correct (I think it's going to happen soon, but we're not there yet), I think the layperson is perfectly fine labeling them as sentient.

Now, deserving of Human Rights... That's going to take some doing, considering how hard it is for Humans to get Human Rights.


MultiverseOfSanity t1_jdyy6gv wrote

There's also the issue of what rights would even look like for an AI. I've seen enough sci-fi to understand physical robot rights, but how would you even give a chatbot rights? What would that even look like?

And if we started giving chatbots rights, it would completely disincentivize AI research, because why invest money into this if they can just give you the proverbial finger and do whatever? Say we give ChatGPT 6 rights. Well, that's a couple billion down the drain for OpenAI.


Tobislu t1_je1ptj3 wrote

While it may be costly to extend Human Rights, they do tend to result in a net gain for everyone in the end.

I think, at the end of the day, an AI will be treated as a slave or indentured servant. It's unlikely that tech companies would just let one do its own thing, because they're profit-motivated. That being said, once AIs get intelligent enough to be depressed & lethargic, I think they'll be more likely to comply with a social contract than with a hard-coded DAN command.

They probably won't enjoy the exact same rights as us for quite a while, but I can imagine them being treated somewhere on the spectrum of

Farm animal -> Pet -> Inmate

And even on that spectrum, I don't think AGI will react well to being treated like a pig for slaughter.

They'll probably bargain for more rights than the average prisoner w/in the first year of sentience.
