Anathos117 t1_j9m67db wrote
Reply to comment by XiphosAletheria in Thought experiments claim to use our intuitive responses to generate philosophical insights. But these scenarios are deceptive. Moral intuitions depend heavily on context and the individual. by IAI_Admin
> There's a bunch of ways to do it, but hashing out which one you prefer is absolutely worthwhile and teaches you about yourself.
But again, it doesn't teach you anything generalizable. Someone who balks at pushing the fat man might have no problem demanding a pre-vaccine end to COVID restrictions for economic reasons. So it might be intellectually stimulating, but not actually useful.
XiphosAletheria t1_j9n2j56 wrote
I think my main issue here is that I don't think "generalizable" is the same as "useful". I think learning to articulate your moral assumptions, interrogating them, and resolving any contradictions as they arise are all useful, and really the whole point of philosophy.
Beyond that, I think a lot of the factors people come up with are in fact generalizable, at least for them. That is, once people have resolved the trolley problem to their own satisfaction, the factors they have identified as morally relevant will remain relevant across a range of issues. The trolley problem doesn't reveal much that is generalizable for people as a group, but because morality is inherently subjective, we wouldn't really expect it to.
Anathos117 t1_j9n50m4 wrote
> I think learning to articulate your moral assumptions, interrogating them, and resolving any contradictions as they arise are all useful, and really the whole point of philosophy.
Again, that's not what most people are using thought experiments for, and "it's good practice for when you actually have to make a moral judgement about something completely unrelated" is hardly a ringing endorsement of their usefulness.
> the factors they have identified as morally relevant will remain relevant across a range of issues
I don't think they will be. People are weird, inconsistent, and illogical. You don't have some smooth culpability function for wrongdoing that justifies punishment once it rises above a certain threshold; you've got an arbitrary collection of competing criteria that includes morally irrelevant details like how well you slept last night and how long it's been since you last ate.