XiphosAletheria t1_j9lzdxm wrote

I think the response there is that the apparent lack of generalizability means only that you have failed to analyze the situation correctly. What the trolley problem teaches us is that those running a closed system should run it so as to minimize the loss of life within it. That is, if I am entering a transit system, and a trolley-problem-ish situation arises in it, I should rationally want the people running the system to flip levers and push buttons such that fewer people die, because I am statistically more likely to be one of the five than the one.

Whereas we shouldn't want people using others as a means to an end in an open scenario, again because the odds that someone will want an organ from me at any given moment are much higher than the odds that I will need one myself.
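
That ex ante logic can be made explicit. A rough sketch of the two calculations (the uniform "I could be anyone" assumption and all of the numbers are mine, purely for illustration):

```python
# Veil-of-ignorance take on the closed (trolley) scenario:
# assume I am equally likely to be any of the six people involved.
p_death_if_flip = 1 / 6     # only the lone person on the siding dies
p_death_if_no_flip = 5 / 6  # all five on the main track die

# A self-interested rider wants operators to follow the lower-risk policy.
assert p_death_if_flip < p_death_if_no_flip

# Open (organ-harvesting) scenario: the asymmetry points the other way.
# These probabilities are hypothetical placeholders.
p_i_ever_need_an_organ = 0.01      # chance I need a transplant someday
p_someone_wants_my_organs = 0.05   # chance I'm ever the convenient donor

# Under a harvesting norm my exposure exceeds my expected benefit,
# so self-interest says to reject that norm.
assert p_someone_wants_my_organs > p_i_ever_need_an_organ
```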

In both cases, what the trolley problem shows is that our moral impulses are rooted in rational self-interest rather than, say, simple utilitarianism.

ulookingatme t1_j9n9itp wrote

As an example, the psychopath agrees to be moral not out of a sense of need or community, but out of self-interest and a desire to avoid the costs of ignoring laws and social norms. But does that then mean morality involves nothing more than making a self-interested choice?

XiphosAletheria t1_j9qinie wrote

I think of morality as being a complex system emerging from the interplay between the demands of individual self-interest and societal self-interest.

The parts of morality that emerge from individual self-interest are mostly fixed and not very controversial, based on common human desires - I would prefer not to be robbed, raped, or killed, and enough other people share those preferences that we can make moral rules against them and generally enforce them.

The parts of morality that arise from societal self-interest are more variable, since what is good for a given society is very context dependent, and more controversial, since what is good for one part of society may be bad for another. In Aztec culture, human sacrifice was morally permissible, and even required, because it was a way of putting an end to tribal conflicts (the leader of the losing tribe would be executed, but in a way viewed as bringing them great honor, minimizing the chances of relatives seeking vengeance). In the American South, slavery used to be morally acceptable (because their plantation-based economy really benefited from it), whereas it was morally reprehensible in the North (because their industrialized economy required workers with levels of skill and education incompatible with slavery). Even within modern America, you see vast differences in moral views over guns, falling out along geographic lines (in rural areas gun ownership is fine, because guns are useful tools; whereas in urban areas gun ownership is suspect, because there's not much use for them except as weapons against other people).

ulookingatme t1_j9qxy67 wrote

Sure, morals are based upon the social contract and self-interest. That's basically what I said.

Anathos117 t1_j9m1f2i wrote

> What the trolley problem teaches us is that those running a closed system should run it so as to minimize the loss of life within it.

Maybe, but that's absolutely not what people are using the Trolley Problem for, and we don't really need the Trolley Problem to reach that conclusion in the first place. The point of thought experiments is to isolate the moral dilemma from details that might distract from the core intuition, but that isolation is worse than useless, because those details aren't distractions; they're profoundly important.

XiphosAletheria t1_j9m3q8e wrote

I think the point of the thought experiment is to help people discover what their intuitions are, what the reasoning is behind them, and where that leads to contradictions. What's important about the trolley problem isn't that people say you should flip the lever. It's that when asked "why?" the answer is almost always "because it is better to save five lives than one". But then when it comes to pushing the fat man or cutting someone up for organs, they say you shouldn't do it, even though the math is the same. At which point people have to work to resolve the contradiction. There's a bunch of ways to do it, but hashing out which one you prefer is absolutely worthwhile and teaches you about yourself.

Anathos117 t1_j9m67db wrote

> There's a bunch of ways to do it, but hashing out which one you prefer is absolutely worthwhile and teaches you about yourself.

But again, it doesn't teach you anything generalizable. Someone who might balk at pushing the fat man might have no problem demanding a pre-vaccine end to COVID restrictions for economic reasons. So it might be intellectually stimulating, but not actually useful.

XiphosAletheria t1_j9n2j56 wrote

I think my main issue here is that I don't think "generalizable" is the same as "useful". I think learning to articulate your moral assumptions, then interrogating them and resolving any contradictions as they arise, are all useful, and really the whole point of philosophy.

Beyond that, I think a lot of the factors people come up with are in fact generalizable, at least for them. That is, once people have resolved the trolley problem to their own satisfaction, the factors they have identified as morally relevant will remain relevant across a range of issues. The trolley problem doesn't reveal much that is generalizable for people as a group, but because morality is inherently subjective, we wouldn't really expect it to.

Anathos117 t1_j9n50m4 wrote

> I think learning to articulate your moral assumptions, then to interrogate them and resolve any contradictions as they arise are all useful, and really the whole point of philosophy.

Again, not what most people are using thought experiments for, and "it's good practice for when you actually have to make a moral judgement about something completely unrelated" is hardly a ringing endorsement for their usefulness.

> the factors they have identified as morally relevant will remain relevant across a range of issues

I don't think they will be. People are weird, inconsistent, and illogical. You don't have some smooth culpability function for wrongdoing that justifies punishment once it rises above a certain threshold; you've got an arbitrary collection of competing criteria that includes morally irrelevant details like how well you slept last night and how long it's been since you last ate.
