FenixFVE t1_j6c9dqg wrote

Asimov's laws never worked. They break down on the trolley problem.

10

AUFunmacy OP t1_j6ca94e wrote

I love that you brought that up. I addressed this in my article, though I didn't use the trolley problem as an example.

I referenced self-driving cars and discussed a scenario where the car must either hit a pedestrian to save its passengers or swerve recklessly around the pedestrian and potentially kill the passengers. Either choice breaks the First Law, but there is a crucial flaw in this argument: it assumes from the outset that only two choices exist.

The trolley problem is a hypothetical ultimatum that can create a paradoxical inability to make an ethical choice; in other words, the trolley problem fixes the number of choices or actions available. In real life there is, if we get very technical, an effectively infinite number of choices. For example, the self-driving car might be able to produce a manoeuvre so technically precise that it evades all danger for both the passengers and the pedestrian; perhaps the AI in the self-driving car can see 360 degrees, sense pedestrians through heat sensors, spot humans far away with some other technology, and adjust its driving accordingly. A rough sketch of this idea follows.
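To make that concrete, here is a minimal sketch of the First Law treated as a hard constraint over a wide action space rather than a forced choice between two bad options. All names, risk numbers, and maneuvers are illustrative assumptions, not a real autonomous-driving API:

```python
# Hedged sketch: First Law as a hard constraint over many candidate actions.
# Everything here is hypothetical and simplified for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_risk: float  # predicted probability of harming a pedestrian
    passenger_risk: float   # predicted probability of harming a passenger
    comfort_cost: float     # secondary objective, considered only once safety holds

def choose_maneuver(candidates: list[Maneuver]) -> Maneuver:
    """Pick a maneuver that harms no human if one exists (the First Law as a
    hard constraint); otherwise fall back to minimizing total predicted harm."""
    safe = [m for m in candidates
            if m.pedestrian_risk == 0 and m.passenger_risk == 0]
    if safe:
        # Among fully safe options, optimize the secondary objective.
        return min(safe, key=lambda m: m.comfort_cost)
    # Degenerate "trolley" case: no harm-free option was found.
    return min(candidates, key=lambda m: m.pedestrian_risk + m.passenger_risk)

# The richer the candidate set (more sensors, longer lookahead), the more
# likely the safe branch is non-empty and the dilemma never arises at all.
candidates = [
    Maneuver("brake hard", pedestrian_risk=0.0, passenger_risk=0.02, comfort_cost=0.9),
    Maneuver("swerve left", pedestrian_risk=0.01, passenger_risk=0.0, comfort_cost=0.5),
    Maneuver("slow and steer around", pedestrian_risk=0.0, passenger_risk=0.0, comfort_cost=0.3),
]
print(choose_maneuver(candidates).name)  # -> "slow and steer around"
```

The point of the sketch is that the binary dilemma only appears when the candidate set is artificially restricted to two options; better sensing and planning expand the set until a harm-free option exists.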

It is possible to accommodate the First Law, but it requires a deliberate effort to ensure the technology you create will not, intentionally or otherwise, cause the death of a human (cause being the key word). I believe it would be an effort well spent.

8