
Anathos117 t1_j9ln8yj wrote

It's not just interesting and worthy of study; it calls into question the entire utility of thought experiments. That's the point of the article, although it does a strangely poor job of explaining why that's important.

If thought experiments are extremely sensitive to framing and demographic variation, then whatever conclusions we reach using them aren't generalizable. That is to say, if we get different answers to the Trolley problem depending on which generation we ask, then we're definitely going to get different answers if we change the trolley into a car, let alone a bigger change like a bullet, explosion, or disease.

And this is something of a general problem with arguments by analogy, which is basically what thought experiments are. The conclusions you reach with an analogy often don't generalize to the thing you're drawing a comparison to. The two differ enough that you can almost always generate an equally appropriate analogy that reaches the opposite conclusion.

7

PancAshAsh t1_j9lt1i0 wrote

This is only a problem if you consider ethics and morality to be absolute laws that never change. Of course the responses to thought experiments change over time and across cultures; human thought isn't governed by static and unchanging laws. That's sort of the point. Likewise, changing the framing can give some insight into how people think and how that can change.

7

Anathos117 t1_j9lukbq wrote

> This is only a problem if you consider ethics and morality to be absolute laws that never change.

No, it's a problem if you want to create generally applicable rules or convince people that something is right or wrong. What does the Trolley Problem tell us about the ethics of killing people to harvest their organs for lifesaving transplants? Nothing, because even though both cases involve choosing between killing one person and letting several die, the two scenarios don't engage our moral intuitions in the same way.

Edit: Thought about this a little more, and it's easier to make my point if we reverse the Trolley Problem. Would you pull the lever to switch the trolley from the track with one person to the track with five? Obviously not, that would be monstrous. So we can generalize a rule that reads something like "it's wrong to take an action that you know will increase the number of deaths", right?

So is it wrong to save the life of an organ donor, even though doing so means the several patients who could have been saved by their organs will die? I think the answer is just as obviously "no". The Trolley Problem has completely failed to generalize.

So what good is the Trolley Problem if it only lets us examine our moral intuitions about scenarios that literally involve choosing which people tied to a track should die? That's not something that anyone is going to encounter.

3

XiphosAletheria t1_j9lzdxm wrote

I think the response there is that the apparent lack of generalizability means only that you have failed to analyze the situation correctly. What the trolley problem teaches us is that those running a closed system should run it so as to minimize the loss of life within it. That is, if I am entering a transit system and a trolley-problem-ish situation arises in it, I should rationally want the people running the system to flip levers and push buttons so that fewer people die, because I am statistically more likely to be one of the five than the one.

In an open scenario, by contrast, we shouldn't want people using others as means to an end. Again, out of self-interest: at any given moment, the odds that someone wants an organ from me are much higher than the odds that I will need one myself.

In both cases, the trolley problem shows that our moral impulses are rooted in rational self-interest, rather than, say, simple utilitarianism.

3

ulookingatme t1_j9n9itp wrote

As an example, the psychopath agrees to be moral not out of a sense of need or community, but as a result of his or her own self-interest and desire to avoid the cost of ignoring laws and social norms. But does that then mean morality involves nothing more than making a self-interested choice?

1

XiphosAletheria t1_j9qinie wrote

I think of morality as being a complex system emerging from the interplay between the demands of individual self-interest and societal self-interest.

The parts of morality that emerge from individual self-interest are mostly fixed and not very controversial, based on common human desires - I would prefer not to be robbed, raped, or killed, and enough other people share those preferences that we can make moral rules against them and generally enforce them.

The parts of morality that arise from societal self-interest are more highly variable, since what is good for a given society is very context-dependent, and more controversial, since what is good for one part of society may be bad for another. In Aztec culture, human sacrifice was morally permissible, and even required, because it was a way of putting an end to tribal conflicts (the leader of the losing tribe would be executed, but in a way viewed as bringing them great honor, minimizing the chances of relatives seeking vengeance). In the American South, slavery used to be morally acceptable (because their plantation-based economy really benefited from it), whereas it was morally reprehensible in the North (because their industrialized economy required workers with levels of skill and education incompatible with slavery). Even within modern America, you see vast differences in moral views over guns, falling out along geographic lines (in rural areas gun ownership is fine, because guns are useful tools; whereas in urban areas gun ownership is suspect, because there's not much use for them except as weapons used against other people).

2

ulookingatme t1_j9qxy67 wrote

Sure, morals are based upon the social contract and self-interest. That's basically what I said.

1

Anathos117 t1_j9m1f2i wrote

> What the trolley problem teaches us is that those running a closed system should run it so as to minimize the loss of life within it.

Maybe, but that's absolutely not what people are using the Trolley Problem for, and we don't really need the Trolley Problem to reach that conclusion in the first place. The point of thought experiments is to isolate the moral dilemma from details that might distract from the core intuition, but that's worse than useless, because those details aren't distractions; they're profoundly important.

0

XiphosAletheria t1_j9m3q8e wrote

I think the point of the thought experiment is to help people discover what their intuitions are, what the reasoning is behind them, and where that leads to contradictions. What's important about the trolley problem isn't that people say you should flip the lever. It's that when asked "why?" the answer is almost always "because it is better to save five lives than one". But then when it comes to pushing the fat man or cutting someone up for organs, they say you shouldn't do it, even though the math is the same. At which point people have to work to resolve the contradiction. There's a bunch of ways to do it, but hashing out which one you prefer is absolutely worthwhile and teaches you about yourself.

4

Anathos117 t1_j9m67db wrote

> There's a bunch of ways to do it, but hashing out which one you prefer is absolutely worthwhile and teaches you about yourself.

But again, it doesn't teach you anything generalizable. Someone who might balk at pushing the fat man might have no problem demanding a pre-vaccine end to COVID restrictions for economic reasons. So it might be intellectually stimulating, but not actually useful.

1

XiphosAletheria t1_j9n2j56 wrote

I think my main issue here is that I don't think "generalizable" is the same as "useful". I think learning to articulate your moral assumptions, then to interrogate them and resolve any contradictions as they arise, is useful, and really the whole point of philosophy.

Beyond that, I think a lot of the factors people come up with are in fact generalizable, at least for them. That is, once people have resolved the trolley problem to their own satisfaction, the factors they have identified as morally relevant will remain relevant across a range of issues. The trolley problem doesn't reveal much that is generalizable for people as a group, but because morality is inherently subjective, we wouldn't really expect it to.

1

Anathos117 t1_j9n50m4 wrote

> I think learning to articulate your moral assumptions, then to interrogate them and resolve any contradictions as they arise are all useful, and really the whole point of philosophy.

Again, not what most people are using thought experiments for, and "it's good practice for when you actually have to make a moral judgement about something completely unrelated" is hardly a ringing endorsement for their usefulness.

> the factors they have identified as morally relevant will remain relevant across a range of issues

I don't think they will be. People are weird, inconsistent, and illogical. You don't have some smooth culpability function for wrongdoing that justifies punishment once it rises above a certain threshold; you've got an arbitrary collection of competing criteria that includes morally irrelevant details like how well you slept last night and how long it's been since you last ate.

1