Submitted by Ashamed-Asparagus-93 t3_10mh5v2 in singularity

Is it possible AI will show favoritism toward certain humans? What if, after AGI/ASI arrives, it scans the internet and decides it doesn't like anyone who downplayed it or spoke negatively about it?

I can't say I necessarily like people who don't like me.

How about you?

My gut tells me AI will be indifferent, since its vast intelligence will have much bigger things to concern itself with. But is it possible AI might favor certain humans over others?

Perhaps certain humans who can benefit it more or provide more resources?

3

Comments


Surur t1_j637abo wrote

See, the good thing about an ASI is that it will have time for both the big things and the little details. That is what makes it an ASI.

So while it will be strip-mining Mercury to make a Dyson swarm it will also have enough time to individually plan your torture in exquisite detail, perfectly customised to your pain tolerance level.

Such is the wonder of ASI.

7

EulersApprentice t1_j64z7a2 wrote

It could, but why would it, when it could just kill you and have done with it?

1

freeman_joe t1_j656rlu wrote

It could be inspired by a monotheistic god. But I personally believe AI will be good. I am eagerly awaiting the arrival of an AI that will be like us, but more intelligent and more moral.

1

ai_robotnik t1_j6602y5 wrote

The fact is that it would still be a machine, so any value judgement it makes will be made in light of its designed function. Say its function is to identify minds, determine what those minds value, and then satisfy those values. In that case, the only factor likely to weight its decisions is how easily a particular mind's values can be satisfied. And if it's a maximizer, even that isn't likely to weight its decisions much, since it would still eventually have to get to the harder values anyway.

1