Tinac4 t1_j5j2lfa wrote
Reply to comment by hammersickle0217 in Professor Martha C. Nussbaum on Vulnerability, Politics, and Moral Worth with Sam Harris by palsh7
Unless unddit missed something, your removed comment was “Obligatory down vote, for Sam.” and nothing else. What makes you think that the mods removed your comment because it criticized Harris, and not because it violated commenting rules 1 and 2?
Tinac4 t1_iyqec3c wrote
Reply to comment by Phil003 in How to solve moral problems with formal logic and probability by beforesunset1010
Great comment! Thanks for the thorough explanation.
Tinac4 t1_iym8rv6 wrote
Reply to comment by chrispd01 in How to solve moral problems with formal logic and probability by beforesunset1010
How does the driver decide that one situation is “safe enough” while the other one isn’t? And what would be the right choice if the odds of an accident were somewhere in the middle, like 0.01%?
I’m not saying that there’s an objective mathematical answer to what “safe enough” means. There isn’t one—it’s a sort-of-arbitrary threshold that’s going to depend on your own values and theory of ethics. However, these situations do exist in real life, and if your theory of ethics can’t work with math and probabilities to at least some extent, you’re going to get stuck when you run into them.
Tinac4 t1_iylss65 wrote
Reply to comment by cutelyaware in How to solve moral problems with formal logic and probability by beforesunset1010
I didn’t say anything about using numbers to justify morality, and neither did the OP. My point is that a lot of real-life moral dilemmas involve uncertainty, and it’s very hard to resolve them if your moral framework isn’t comfortable with probabilities to some extent. For instance, how would you respond to the two scenarios I gave above?
Tinac4 t1_iylruah wrote
Reply to comment by cutelyaware in How to solve moral problems with formal logic and probability by beforesunset1010
Math isn't only a tool for utilitarians, though. The real world is fundamentally uncertain--people are forced to make decisions involving probability all the time. To use an example in the essay, consider driving: If there's a 0.0000001% chance of killing someone while driving to work, is that acceptable? What about a 5% risk? Most deontologists and virtue ethicists would probably be okay with the first option (they make that choice every day!), but not the second (also a choice commonly made when e.g. deciding not to drive home drunk). How do they draw the line without using numbers on at least some level? Or what will they do when confronted with a charitable intervention that, because the details are complicated, will save someone's life for $1,000 with 50% probability?
A comprehensive moral theory can't operate only in the realm of thought experiments and trolley problems where every piece of the situation is 100% certain. It has to handle uncertainty in the real world too, and the only way to do that is to be comfortable with probabilities, at least to some extent.
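To make the arithmetic concrete, here's a minimal sketch in Python (the 250 commutes per year figure is an assumption I'm adding for illustration; the other numbers are the hypotheticals above):

```python
# Illustrative only: the expected-value arithmetic behind the
# hypothetical numbers above (not a claim about real accident rates).

def expected_deaths(p_fatal_per_trip: float, trips: int) -> float:
    """Expected deaths caused over a given number of trips."""
    return p_fatal_per_trip * trips

# A 0.0000001% risk per trip vs. a 5% risk, over 250 commutes/year:
low  = expected_deaths(0.0000001 / 100, 250)   # 2.5e-07 expected deaths
high = expected_deaths(5 / 100, 250)           # 12.5 expected deaths

# The uncertain intervention: $1,000 saves a life with 50% probability,
# so the expected cost per life saved is $1,000 / 0.5 = $2,000.
cost_per_expected_life = 1_000 / 0.5

print(low, high, cost_per_expected_life)
```

The point isn't that any particular threshold falls out of the math--it's that once you write the numbers down, "where do you draw the line?" becomes a question you can actually answer.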
Tinac4 t1_ittnxm7 wrote
Reply to comment by Rayden117 in Peter Singer Is the Philosopher of the Status Quo by TuvixWasMurderedR1P
I think you’re conflating the Gates Foundation with a lot of other flawed charities that aren’t much like it. For example:
- “Catastrophic at dealing with social problems”: Outside of some controversies regarding US education, the BMGF doesn’t seem to have caused much harm, while undoubtedly accomplishing a lot of good in global health. Some other charities are useless or counterproductive, but I don’t think that applies here, certainly not on net.
- Overhead: Unlike the Red Cross, BMGF is one of the charities that accomplishes a lot of good without wasting everything on overhead. I feel comfortable saying this without citation; you can look up their vaccination programs if you want. Moreover, it’s overall effectiveness, not overhead, that matters in the end (and I’m not aware of the BMGF having excessive overhead anyway).
- The BMGF is not intended as a substitute for government, nor does it substitute for it in practice. Most of its global health programs are done in countries that lack healthcare or social safety nets due to a combination of poverty and corruption; this is unlikely to change if the BMGF disappears. It’s an organization that focuses on improving some short-term aspects of health and well-being, and most of its long-term goals (I think) revolve around eradicating diseases rather than large-scale economic development.
Regardless of what the BMGF has sometimes done wrong (any sufficiently large and complicated charity will screw up somewhere), they’ve very plausibly saved tens of millions of lives so far. Most of the above criticisms fall flat after taking this into account.
Tinac4 t1_ittmfx9 wrote
Reply to comment by ilolvu in Peter Singer Is the Philosopher of the Status Quo by TuvixWasMurderedR1P
I don’t think that’s quite right. Gates does control the Foundation, but as a nonprofit, he can’t just spend its money on anything he wants—I’m pretty sure that it does have to get spent on charity in some way instead of yachts or mansions. (I’d like a source if you disagree.)
Tinac4 t1_itqsl92 wrote
Reply to comment by WarrenHarding in Peter Singer Is the Philosopher of the Status Quo by TuvixWasMurderedR1P
>But let me ask you this - with Bill Gates' charity giving away billions of dollars constantly, how does he continue to make more and more money every single year?
None of the money he's getting comes from the BMGF, so presumably it's because he owns a huge amount of stock in one of the largest (and still-growing) tech companies in the world.
>For example, if you look up where he's sending it, do you think he's putting it all directly in the hands of those who need it?
Yes, I think so. The foundation doesn't have a 100% perfect track record in every area, but it's pretty darn good, especially regarding vaccine campaigns in developing countries.
>Because the charity has also donated billions to other companies, and hundreds of millions to those they have stocks and bonds in.
Which other companies, specifically, and what amount of that isn't just the BMGF investing its funds in the long term (which is a good choice if they can't spend everything on short notice)? How does Gates get any of this money back, and how does the overall amount invested compare to the ~$20 billion donated to global health causes?
Tinac4 t1_itqrkxy wrote
Reply to comment by MrPezevenk in Peter Singer Is the Philosopher of the Status Quo by TuvixWasMurderedR1P
Sure, but he still goes further than 99.something% of people in his income bracket. 40% is a pretty substantial chunk of income even if he's making (say) 200k/year. As for why he doesn't donate more:
>"I just accept that I'm not a saint. There are people in my book who are better than I am, people who've donated a kidney to a stranger. I still have two kidneys. And I could certainly live more parsimoniously and donate more as a result."
>...
>"On the other hand, maybe it's the people like you who aren't giving – or who are working their way up to giving 1 per cent – who make me feel, 'Look, I'm not such a bad guy, I'm giving more than most.'"
Tinac4 t1_itqqijs wrote
Reply to comment by NotABotttttttttttttt in Peter Singer Is the Philosopher of the Status Quo by TuvixWasMurderedR1P
Singer donates 40% of his income to charity.
Tinac4 t1_itqqhvu wrote
Reply to comment by GrogramanTheRed in Peter Singer Is the Philosopher of the Status Quo by TuvixWasMurderedR1P
Thanks for taking the time to explain! I think I understand a little better now. It does seem like the difference of opinion is going to come down to how common those transformative policies are and how easy they are to find--although I think you could plausibly put them in the same category as exploratory research, where you cast a wide net to find a few major discoveries.
Tinac4 t1_itq5jrg wrote
Reply to comment by WarrenHarding in Peter Singer Is the Philosopher of the Status Quo by TuvixWasMurderedR1P
I can buy that some billionaire philanthropy uses charity as a front for tax evasion, but I'm not convinced that's true for all of it. For example, as far as I know, money that gets put into the Bill and Melinda Gates Foundation can't just be taken back out and spent on a superyacht--it's not really Gates' money anymore, and there are rules regarding what he can do with it. Gates doesn't have to pay taxes on the money that he puts in, sure, but he's certainly not making any money for doing this (especially when the donated money is in the form of stock shares that he doesn't have to pay taxes on in the first place). Is the BMGF really turning a profit for Gates, and if so, can I have a source proving this?
Again, I'm not saying that there aren't charities out there that are just fronts for tax evasion--there very plausibly are--and I’m also not saying that billionaires are beyond criticism or that we shouldn't raise taxes on them, but I do think that a decent chunk of billionaire philanthropy is actually philanthropy. Plus, the BMGF is a very salient example of billionaire philanthropy, so if the BMGF isn't a tax evasion scheme I'd be wary of painting with as wide a brush as you are.
Tinac4 t1_itq21fx wrote
Reply to comment by GrogramanTheRed in Peter Singer Is the Philosopher of the Status Quo by TuvixWasMurderedR1P
The difference between your stance and Singer’s has nothing to do with courage—it’s almost exclusively a matter of epistemics. Singer thinks we live in a world where systemic change is hard and where he would have a very small chance of accomplishing anything if he switched away from charity to advocate for it full-time. You (apologies if I’m making any bad assumptions) think we live in a world where systemic change is somewhat easier and where Singer would have a substantial chance of making concrete changes if he pushed for it. Courage doesn’t factor into it—and unless criticism explicitly focuses on why systemic change is easier than Singer thinks, it’s going to miss the mark.
(Another possible difference of opinion is that Singer is more risk-averse—that given a choice between saving 1 life with certainty and 101 lives with a 1% chance, he’d pick the former—but since he’s a utilitarian, and it’s hard to do utilitarianism without being at least somewhat comfortable with expected value theory, I doubt that’s his main objection.)
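To spell out that parenthetical, here's a minimal sketch of the arithmetic (the square-root utility function is an arbitrary stand-in for risk aversion that I'm assuming for illustration; the other numbers are the hypotheticals above):

```python
import math

# Risk-neutral expected value narrowly favors the gamble:
ev_certain = 1 * 1.0        # 1 life with certainty  -> 1.00 expected lives
ev_gamble  = 101 * 0.01     # 101 lives at 1% chance -> 1.01 expected lives

# A risk-averse agent, modeled here with an (arbitrarily chosen)
# concave utility function u(lives) = sqrt(lives), flips the choice:
u_certain = 1.0 * math.sqrt(1)      # 1.00
u_gamble  = 0.01 * math.sqrt(101)   # ~0.10

print(ev_certain, ev_gamble, u_certain, u_gamble)
```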
Tinac4 t1_itpw3yy wrote
Reply to comment by glass_superman in Peter Singer Is the Philosopher of the Status Quo by TuvixWasMurderedR1P
Singer doesn't advocate for giving to charity because he thinks it'll miraculously solve poverty--he advocates for it because it simply makes the world a better place. If you lived in a hypothetical world where you knew that you couldn't personally accomplish any political changes, and you saw a child drowning in a nearby lake, would you jump in and rescue them, or would you continue walking because saving the kid wouldn't solve any of the systemic problems of our economic system?
The question of whether to spend effort on getting people to donate to charity vs getting people to push for policy changes isn't so easy to answer when you factor in likelihood of success. Political change is quite difficult for any person to accomplish--there's no shortage of left-wing academic figures who got a lot of attention advocating for change but had little impact overall. In contrast, Singer has been extremely successful at getting a lot of people to donate to charity. What's better: A high probability of convincing 1,000 people to donate and save 10,000 lives, or an unknown but probably very low probability of convincing the entire US to reform its political system? Do you save the drowning children in front of you or do you gamble on a tiny chance of a vastly higher payoff?
(Plus, you can multitask by donating to charity and also voting for good politicians or policies. Singer votes, and isn't silent about who he votes for.)
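To put rough numbers on that comparison, a minimal sketch (the charity-side figures are the hypotheticals above; the success probability and the payoff for successful reform are numbers I'm assuming purely for illustration):

```python
# Break-even analysis for the charity-vs-systemic-change gamble above.

p_charity     = 0.9          # "high probability" of success (assumed)
lives_charity = 10_000       # lives saved via 1,000 convinced donors

lives_reform  = 1_000_000    # assumed payoff if reform succeeds

# How likely does reform have to be before the gamble matches the
# safe bet in expected lives saved?
p_breakeven = (p_charity * lives_charity) / lives_reform
print(p_breakeven)   # 0.009 -> reform needs >0.9% odds to win out
```

Even under generous assumptions about the payoff, the question turns on whether the odds of success clear a concrete threshold--which is exactly the kind of empirical disagreement the comment above points to.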
Tinac4 t1_is4ve4q wrote
Reply to comment by water_panther in The philosophy of "longtermism" and Stoicism by cleboomusic
>The problem is not that longtermists go around saying "We really ought to do more genociding," it's that longtermists go around arguing that essentially any present-day sacrifice short of human extinction is trivially easy to justify according to their deeply wonky assumptions about the future.
I don't think they do this either.
Like, if I go to the EA forum (probably the online community with the biggest longtermist population) and look for highly-upvoted posts about longtermism and ethics, I find things like this and this and this and this. There is a lot of skepticism about naive utilitarianism, and an abundance of people saying things like "being a hardcore utilitarian can lead you to weird places, therefore we should find a theory of ethics that avoids doing that while keeping the good parts of longtermism and utilitarianism intact". Conversely, there's a total lack of posts or responses that take the opposite stance and say, actually, we should accept literally every crazy consequence of naive longtermism, and it's completely morally okay to sacrifice millions of people if it reduces the odds of humanity's extinction by 0.01%. Seriously, I swear I'm not cherrypicking examples--this is what the average post about longtermist ethics looks like.
You insist that longtermism is intrinsically built around exotic hypotheticals and a willingness to make horrible sacrifices--but if that's true, then why do they spend even more time picking those claims apart than their biggest critics do?
I think you could reasonably argue that longtermists need to spend more time working on the philosophical foundations of their movement, to find a way to reconcile the good parts of utilitarianism with the bad parts (and I bet most longtermists would agree!). I think you can't argue that the core of longtermism--"the view that positively influencing the long-term future is a key moral priority of our time"--is a stance that can only be justified by the bad parts.
Tinac4 t1_irzt5sn wrote
Reply to comment by water_panther in The philosophy of "longtermism" and Stoicism by cleboomusic
The only cases where I've seen longtermist reasoning used in favor of genocide are when non-longtermists try to reductio longtermism and/or utilitarianism with weird unrealistic hypotheticals. These problems aren't new to utilitarianism, but I don't think it makes much sense to be concerned about them when 1) most longtermists I've read about aren't actually hardcore utilitarians, 2) real-life, non-strawman longtermists don't advocate for genocide, and 3) real-life hardcore utilitarians that I'm familiar with spend zero time thinking about genocide and quite a lot of time stressing about whether they should be donating more to charity.
The implications of weird hypothetical thought experiments are only as serious as people take them to be; i.e. not very.
Tinac4 t1_irzsu70 wrote
Reply to comment by koron123 in The philosophy of "longtermism" and Stoicism by cleboomusic
Isn't the assumption that humanity is probably going to get wiped out within the next few thousand years also bold? It's far from impossible, but so is humanity's survival--I'd call being highly confident about either possibility bold.
Tinac4 t1_jcflm52 wrote
Reply to Bentham’s Mugging: A dialogue on how to exploit utilitarians by JohanEGustafsson
>BENTHAM. Fair enough. But, even so, I worry that giving you the money would set a bad precedent, encouraging copycats to run similar schemes.
>MUGGER. Don't. This transaction will be our little secret. You have my word.
Fun thought experiment! I think the easiest way for utilitarians to respond is to zero in on this section.
The scenario seems like it's inspired by Newcomb's problem. A utilitarian who one-boxes in Newcomb's problem--i.e. who endorses a decision theory that tells them to one-box and to accept point 2 above--won't have any issues with muggers.
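For readers unfamiliar with the setup: in the standard version of Newcomb's problem, a highly accurate predictor puts $1,000,000 in an opaque box only if it predicts you'll take just that box, while a transparent box always holds $1,000. A minimal sketch of why one-boxing wins in expectation (the 99% predictor accuracy is an assumed figure; anything close to 1 gives the same ordering):

```python
# Standard Newcomb payoffs with an assumed predictor accuracy of 99%.

accuracy = 0.99

# One-box: you get the $1M iff the predictor correctly foresaw it.
ev_one_box = accuracy * 1_000_000                 # 990,000.0

# Two-box: you always get the $1,000, plus the $1M iff the predictor
# wrongly expected you to one-box.
ev_two_box = 1_000 + (1 - accuracy) * 1_000_000   # 11,000.0

print(ev_one_box, ev_two_box)
```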