32 Comments
Aug 30, 2022 · Liked by Richard Y Chappell

Do you have an argument for beneficentrism, or do you just take it to be self-evident or intuitive? FWIW I would be inclined to accept some version of it, but that's based on various background moral commitments that motivate beneficence as a central moral principle. (E.g., I'm sympathetic to overlapping consensus as a moral methodology, and it aligns with my Christian faith, pace TonyZa's gloss.)


"Beneficentrism strikes me as impossible to deny while retaining basic moral decency. Does anyone disagree? Devil's advocates are welcome to comment."

That depends on how you define basic moral decency, or on whether you even believe there is such a thing as a universal moral tenet.

But the truth is that most moral/religious/ideological systems in history didn't place much value on utilitarianism/beneficentrism. The ancient Greeks, Romans, Vikings, and Mongols were not big on utilitarianism. Even Christianity, which puts a lot of weight on Works, is first concerned with Faith. Marxists might look utilitarian, but they see class struggle, the Revolution, and the accompanying bloodshed as taking priority, with utopia arriving only in the last phase. Nietzsche saw utilitarianism as the defining characteristic of a slave morality.

Apr 18, 2023 · edited Apr 18, 2023

You don't need me to remind you that there's a fascinating disconnect between what we think (and say) we should do and what we actually do.

I believe understanding the actual reasons behind that paradox is the necessary first step. "The Righteous Mind" by Haidt is one of my favorite books.

A personal theory of why we will never see eye to eye on everything is the completely logical and obvious one: we all have different perspectives, quite literally speaking.

I believe that if people were forced to secretly face a trolley problem from the perspective of being the one person tied to the track, almost everyone would choose to let the train run over X strangers instead, where X is a depravedly high number.

Many people probably truly feel their lives are more valuable than the entire universe, because what good is the universe to them from the perspective of them being dead, right? They'll never admit it, obviously.

People's true perspectives are demonstrated through their actions every single day, e.g. when they go on vacation instead of donating to malaria nets, or silently refrain from joining the pledges you mention.


The problem with EA is precisely (one of) the issues you took with deontic minimalism (didn't murder anyone today, hooray!).

EA maximizes the impact of the least possible effort. We may be great EAs, and we can pat ourselves on the back for spending time finding the best place to donate our 10%, but that is still only 10%! 100% effectiveness of a grape is nothing compared to 10% of a watermelon.

Perhaps EAs could recruit all the watermelons, BUT this IS the problem with EA: in order to be a watermelon (have enough money that your impact is actually an impact) you need to do some rather seedy things.

Instead of trying to figure out how to squeeze the juice equivalent of 10% of a watermelon out of a grape, EAs should direct their efforts (working and charitable) toward designing and implementing systems that achieve those ends without the sacrifice. And this is possible. It only requires a few watermelons to accept slightly more risk than they are used to, RATHER than all watermelons becoming EAs (fat chance).

If we switch perspective from "cash on hand and what to do with it" to "how much more risk can I reasonably take on", then we will naturally develop solutions to the same challenges EAs have rightly determined we should address (I won't enumerate).

And to understand fair risk distribution we have to use deontic structures. Utilitarians have no metrics for risk, only for results.

Glad to see you on SS!! Enjoy ;)


Two devil's advocate points:

1. General wellbeing is hard, if not impossible, to quantify.

2. Historically, appeals to doing what's best for the general good have been used to justify countless violations of individual rights.


I feel like this doesn't need an essay or a new term.

Valuing the general wellbeing of other creatures should be the most basic assumption in basically any discussion, and if someone disagrees they simply don't deserve to be in a civilization or to have their ideas taken seriously.

Sam Harris put this beautifully once (I can find the video if you want; he was talking with Singer and a few other people): "if someone thinks bleeding, being in pain all day, and almost dying is healthy, because their definition of health is different, you simply don't invite them to the health conference" (a paraphrase).


I'd love to know if you have any reflections on what Nadia wrote, which is less a deep-philosophy take and more a practical-philosophy take on EA and EA-adjacent ideas being "ideas machines". Also, have you read the recent Larry Temkin book examining EA thinking (though mostly with respect to global-poverty EA)? https://forum.effectivealtruism.org/posts/CbKXqpBzd6s4TTuD7/thoughts-on-nadia-asparouhova-nee-eghbal-essay-on-ea-ideas

May 8, 2022 · edited May 8, 2022

One issue might be that a bit of a status competition/hierarchy can develop over who is being the most altruistic, and that can be kind of off-putting sometimes.
