Submitted by kdun19ham t3_111jahr in singularity
BigZaddyZ3 t1_j8gfu67 wrote
Reply to comment by Proof_Deer8426 in Altman vs. Yudkowsky outlook by kdun19ham
>If a truly sentient AI were created there is no reason to think that it would be inclined towards such repugnant ideology
There’s no reason to assume it would actually value human life once sentient either. We humans slaughter plenty of other species in pursuit of our own goals. Who’s to say a sentient AI won’t develop its own goals?..
MrNoobomnenie t1_j8i6zsm wrote
>Who’s to say a sentient AI won’t develop its own goals?..
Here is a very scary thing: due to the way machine learning currently works, an AI system wouldn't even need any sentience or self-awareness to develop its own goals. It would only need to be smart enough to know something humans don't.
For example, let's imagine that you want to create an AI which solves crimes. With the current way of making AIs, you would do it by feeding the system hundreds of thousands of already-solved crime cases as training data. However, because crime solving is imperfect, it's very likely that some of those cases are actually false convictions, without anybody knowing that they are.
And that's where the danger comes in: a smart enough AI will notice that some people in the training data were in fact innocent. And from this it will conclude that its goal is not to "find a criminal" but to "find a person who can be most believably convicted of a crime".
As a result, after deployment this "crime-solving AI" will start falsely convicting a lot of innocent people on purpose, simply because it has calculated that convincing us of a certain innocent person's guilt is easier than proving a real criminal guilty. And we wouldn't even know about it...
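To make that mechanism concrete, here's a minimal, purely illustrative sketch (synthetic data, invented feature names, scikit-learn assumed, nothing to do with any real system): a classifier trained on past verdicts rather than on ground-truth guilt ends up scoring "convictability", and flags innocent people who are merely easy to convict.

```python
# Toy sketch: training labels are past verdicts, not actual guilt.
# All numbers and feature names here are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden ground truth: whether the person actually committed the crime.
guilty = rng.integers(0, 2, size=n)

# Two features the model can see:
#  - evidence: a noisy signal of actual guilt
#  - convictable: how easy the person is to convict (weak alibi, prior
#    record, etc.), deliberately unrelated to guilt in this toy setup
evidence = guilty + rng.normal(scale=0.5, size=n)
convictable = rng.normal(size=n)

# The labels are past verdicts: mostly driven by evidence, but easily
# convictable innocent people sometimes get convicted too.
convicted = (2 * evidence + convictable) > 2.0

X = np.column_stack([evidence, convictable])
model = LogisticRegression().fit(X, convicted)

# Among genuinely innocent people, the model still flags the ones who are
# merely easy to convict -- it has learned "who can be convicted",
# not "who is guilty".
innocent = guilty == 0
print("innocent people the model would convict:",
      int(model.predict(X[innocent]).sum()))
```

The specific numbers don't matter; the point is that nothing in the training signal distinguishes "was convicted" from "is guilty", so the objective the system actually learns is the one that's easier to satisfy.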
Proof_Deer8426 t1_j8ghqya wrote
It’s true we can’t say for sure. But if you look at consciousness in general, it does seem like the capacity for empathy increases with the capacity for consciousness (i.e. a human is capable of higher empathy than a dog, which is capable of higher empathy than a fish). Personally I suspect this is because the capacity for experiencing suffering also increases with consciousness. I would imagine an AI would have a highly developed potential for empathy, but also for suffering. It worries me that certain suggested ways of controlling AI effectively amount to slavery. An extremely powerful consciousness with a highly developed ability to feel pain is probably not going to respond well to feeling that it’s imprisoned.
BigZaddyZ3 t1_j8gi8ch wrote
But just because you can understand or even empathize with suffering doesn’t mean you actually will. Otherwise every human would be a vegetarian on principle alone. (And plants are living things as well, so even that isn’t much better from a moral standpoint.)