
therealduckrabbit t1_iyktxnz wrote

Schopenhauer and Plato both address this issue in different ways. Plato describes akrasia as weakness of will: one knows what reason dictates but fails to pursue that goal. Though he identifies it as a phenomenon, he can't explain it, mostly because, as Schopenhauer points out, his account rests on an empirically flawed moral psychology. For Schopenhauer, reason does not motivate us to act; desire is an exclusive function of the Will, which is what always moves us to act. Reason is simply an instrument that guides us in fulfilling desire efficiently and effectively.

That doesn't mean rational approaches to ethics have no place. They are best used in collective instruments like government to ensure good outcomes when allocating public or finite resources.

The great articulator of this debate is Richard Taylor, the beekeeping philosopher, in his book *Good and Evil*, the most underrated philosophy publication of the last 100 years imo.


Cli4ordtheBRD t1_iyo5vaj wrote

Hopping on the top comment to provide more context on "longtermism" and "effective altruism", which I think the author was criticizing (but I'm honestly not sure).

First things first: humanity (in our biological form) is not getting out of our Solar System.

So the whole "colonize the galaxy" plan with people being born on the way is not going to work. Those babies will not survive, because every biological system depends on the constant force of Earth's gravity to develop properly. Their parents probably won't fare much better, as their bone density degrades over time and the lost calcium turns into painful kidney stones.

Here's an article from the Economist's 1843 Magazine that covers Effective Altruism (which is getting a lot of attention right now thanks to Sam Bankman-Fried having bankrolled the movement).

My perspective is that there are a lot of people with good intentions, but the intellectual leaders of the movement are ethically challenged, at the "getting high on their own farts" stage, and the whole thing is being seized on by some of the absolute worst people (Elon Musk & Peter Thiel) to justify their horrible actions, with dreams of populating the stars.

>The Oxford branch of effective altruism sits at the heart of an intricate, lavishly funded network of institutions that have attracted some of Silicon Valley’s richest individuals. The movement’s circle of sympathisers has included tech billionaires such as Elon Musk, Peter Thiel and Dustin Moskovitz, one of the founders of Facebook, and public intellectuals like the psychologist Steven Pinker and Singer, one of the world’s most prominent moral philosophers. Billionaires like Moskovitz fund the academics and their institutes, and the academics advise governments, security agencies and blue-chip companies on how to be good. The 80,000 Hours recruitment site, which features jobs at Google, Microsoft, Britain’s Cabinet Office, the European Union and the United Nations, encourages effective altruists to seek influential roles near the seats of power.

#William MacAskill

A 35-year-old Oxford professor, MacAskill is the closest thing the movement has to a founder, and he has taken increasingly controversial positions.

>The commitment to do the most good can lead effective altruists to pursue goals that feel counterintuitive. In “Doing Good Better”, MacAskill laments his time working as a care assistant in a nursing home in his youth. He believes that someone else would have needed the money more and would have probably done a better job. When I asked about this over email, he wrote: “I certainly don’t regret working there; it was one of the more formative experiences of my life…My mind often returns there when I think about the suffering in the world.” But, according to the core values of effective altruism, improving your own moral sensibility can be a misallocation of resources, no matter how personally enriching this can be.

#Longtermism

>One idea has taken particular hold among effective altruists: longtermism. In 2005 Nick Bostrom, a Swedish philosopher, took to the stage at a ted conference in a rumpled, loose-fitting beige suit. In a loud staccato voice he told his audience that death was an “economically enormously wasteful” phenomenon. According to four studies, including one of his own, there was a “substantial risk” that humankind wouldn’t survive the next century, he said. He claimed that reducing the probability of an existential risk occurring within a generation by even 1% would be equivalent to saving 60m lives.

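Worth unpacking where that 60m figure presumably comes from: it's just expected-value arithmetic, multiplying the roughly 6 billion people alive at the time by a 1% reduction in the probability of extinction. A minimal sketch (the population figure is my round number, not Bostrom's exact input):

```python
# Expected lives saved by shaving 1% off extinction risk, per the simple
# expected-value argument (round, illustrative numbers).
population = 6_000_000_000   # roughly the world population circa 2005 (assumed)
risk_reduction = 0.01        # a 1-percentage-point cut in extinction probability

expected_lives_saved = population * risk_reduction
print(f"{expected_lives_saved:,.0f}")  # 60,000,000
```

Strong longtermists then run the same multiplication with projected *future* populations, which is where the numbers get astronomical.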
>Disillusioned effective altruists are dismayed by the increasing predominance of “strong longtermism”. Strong longtermists argue that since the potential population of the future dwarfs that of the present, our moral obligations to the current generation are insignificant compared with all those yet to come. By this logic, the most important thing any of us can do is to stop world-shattering events from occurring.

#Going full Orwell

>In 2019 Bostrom once again took to the ted stage to explain “how civilisation could destroy itself” by creating unharnessed machine super-intelligence, uncontrolled nuclear weapons and genetically modified pathogens. To mitigate these risks and “stabilise the world”, “preventive policing” might be deployed to thwart malign individuals before they could act. “This would require ubiquitous surveillance. Everyone would be monitored all of the time,” Bostrom said. Chris Anderson, head of ted, cut in: “You know that mass surveillance is not a very popular term right now?” The crowd laughed, but Bostrom didn’t look like he was joking.

>Not everyone agrees. Emile Torres, an outspoken critic of effective altruism, regards longtermism as “one of the most dangerous secular ideologies in the world today”. Torres, who studies existential risk and uses the pronoun “they”, joined “the community” in around 2015. “I was very enamoured with effective altruism at first. Who doesn’t want to do the most good?” they told me.

>But Torres grew increasingly concerned by the narrow interpretation of longtermism, though they understood the appeal of its “sexiness”. In a recent article, Torres wrote that if longtermism “sounds appalling, it’s because it is appalling”. When they announced plans on Facebook to participate in a documentary on existential risk, the Centre for Effective Altruism immediately sent them a set of talking points.

>Chugg, for his part, also had his confidence in effective altruism fatally shaken in the aftermath of a working paper on strong longtermism, published by Hilary Greaves and MacAskill in 2019. In 2021 an updated version of the essay revised down their estimate of the future human population by several orders of magnitude. To Chugg, this underscored the fact that their estimates had always been arbitrary. “Just as the astrologer promises us that ‘struggle is in our future’ and can therefore never be refuted, so too can the longtermist simply claim that there are a staggering number of people in the future, thus rendering any counterargument moot,” he wrote in a post on the Effective Altruism forum. This matters, Chugg told me, because “You’re starting to pull numbers out of hats, and comparing them to saving living kids from malaria.”

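To see concretely why Chugg calls this "pulling numbers out of hats": in the strong-longtermist calculus, whatever future-population estimate you assume gets multiplied by a tiny assumed probability shift, and that product is then weighed against present-day interventions. Revise the population estimate by a couple of orders of magnitude and the comparison can flip. A rough sketch with placeholder numbers (none of these are Greaves and MacAskill's actual figures):

```python
# How the choice of future-population estimate drives the conclusion.
# All numbers below are illustrative placeholders, not figures from the paper.
prob_shift = 1e-10            # assumed tiny reduction in extinction probability
malaria_lives_saved = 20_000  # a concrete present-day intervention (made up)

for future_population in (1e16, 1e14):  # estimates two orders of magnitude apart
    expected_future_lives = prob_shift * future_population
    beats_malaria = expected_future_lives > malaria_lives_saved
    print(f"{future_population:.0e} future people -> "
          f"{expected_future_lives:,.0f} expected lives, beats malaria: {beats_malaria}")
```

With 10^16 future people the longtermist bet "wins"; with 10^14 it "loses". The conclusion hinges entirely on which unverifiable number you picked.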
>Effective altruists believe that they will save humanity. In a poem published on his personal website, Bostrom imagines himself and his colleagues as superheroes, preventing future disasters: “Daytime a tweedy don/ at dark a superhero/ flying off into the night/ cape a-fluttering/ to intercept villains and stop catastrophes."

I think this is ultimately driven by a whole group of people obsessed with "maximizing" instead of "optimizing". They want a number (to the decimal) telling them which option to choose, and they can't stand the thought of "good enough, but it could have been better". Essentially, they're letting the perfect be the enemy of the good, and if we're not careful they're just going to slide into fascism with more math.
