Beneficentrism
Utilitarianism minus the controversial bits
Philosophical discussion of utilitarianism understandably focuses on its most controversial features: its rejection of deontic constraints and the "demandingness" of impartial maximizing. But in fact almost all of the important real-world implications of utilitarianism stem from a much weaker feature, one that I think probably ought to be shared by every sensible moral view. It's just the claim that it's really important to help others—however distant or different from us they may be. As Peter Singer and other effective altruists have long argued, we're able to do extraordinary amounts of good for others very easily (e.g. just by donating 10% of our income to the most effective charities), and this is very much worth doing. (This doesn’t require dedicating oneself exclusively to promoting the good. You might have several other central life projects, while giving at least some substantial weight to the project of beneficence.)
It'd be helpful to have a snappy name for this view, which assigns (non-exclusive) central moral importance to beneficence. So let's coin the following:
Beneficentrism: The view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.
Clearly, you don't have to be a utilitarian to accept beneficentrism. You could accept deontic constraints. You could accept any number of supplemental non-welfarist values (as long as they don't implausibly swamp the importance of welfare). You could accept any number of views about partiality and/or priority. You can reject 'maximizing' accounts of obligation in favour of views that leave room for supererogation. You just need to appreciate that the numbers count, such that immensely helping others is immensely important.
Once you accept this very basic claim, it seems that you should probably be pretty enthusiastic about effective altruism. I'm not making any claims about "obligation" here, but speaking just in terms of basic warrant or fitting attitudes: we should care about what's important, and effective altruism basically just is the attempt to put beneficentrism into practice, i.e. to act upon what we've just agreed is deeply important. (Of course, you might have any number of empirical disagreements with other effective altruists about how best to achieve this goal. Nothing here commits you to agreeing with them about such details. I just mean that you ought to be enthusiastic about the basic project.)
Beneficentrism strikes me as impossible to deny while retaining basic moral decency. (Cf. Stalin's "a single death is a tragedy, a million deaths are a statistic.") Does anyone disagree? Devil's advocates are welcome to comment.
Even if theoretically very tame, beneficentrism strikes me as an immensely important claim in practice, just because most people don't really seem to treat promoting the general welfare as an especially important goal. Utilitarians do, of course, and are massively over-represented in the effective altruism movement as a result. But why don't more non-utilitarians give more weight to the importance of impartial beneficence? I don't understand it. (Comments welcome on this point, too.)
One possibility is that the standard ideology of "obligations", "permissions", etc., encourages people to focus on meeting the bare baseline of moral adequacy. (Didn't murder anyone today, hooray!) But I think that's a bad ideology. We shouldn't just care about avoiding wrongdoing. We should care about what's important.
So I'd like to invite everyone, whatever your moral-theoretical persuasion, to explicitly consider what you think is truly important, and whether beneficentrism might be a part of the answer.
And if you're then enthusiastic (as I hope you might be) about making beneficence a more central aspect of your life, maybe consider the Giving What We Can pledge, and/or other ways to make a difference?
Comments

The problem with EA is precisely one of the issues you took with deontic minimalism ("didn't murder anyone today, hooray!").

EA maximizes the impact of the least possible effort. We may be great EAs, and we can pat ourselves on the back for spending time to find the best place to donate our 10%, but that is still only 10%! Squeezing 100% of the juice from a grape is nothing compared to squeezing 10% from a watermelon.

Perhaps EAs could recruit all the watermelons, BUT this IS the problem with EA: in order to be a watermelon (to have enough money that your impact actually makes a difference) you need to do some rather seedy things.

Instead of trying to figure out how to squeeze the juice equivalent to 10% of a watermelon out of a grape, EAs should spend their efforts (working and charitable) on designing and implementing systems which achieve those ends without the sacrifice. And this is possible. It only requires a few watermelons to accept slightly more risk than they are used to, RATHER than all watermelons becoming EAs (fat chance).

If we switch perspective from "cash on hand and what to do with it" to "how much more risk can I reasonably take on", then we will naturally develop solutions to the same challenges EAs have rightly determined we should address (I won't enumerate).

And to understand fair risk distribution, we have to use deontic structures. Utilitarians have no metrics for risk, only for results.
One issue might be that a bit of a status competition or hierarchy can develop over who's being the most altruistic, and that can sometimes be off-putting.