

Utilitarianism
Bernard Williams notoriously predicted that “utilitarianism's fate is to usher itself from the scene.” The dubious claim that utilitarianism is self-effacing (or recommends against its own belief and promotion) is often thought to follow from the more credible claim that constantly engaging in crude calculation would be counterproductive. But of course self-effacingness doesn’t follow from this, because belief in utilitarianism opposes, rather than requires, engaging in counterproductive behaviours like constant calculation.
Notably, prominent utilitarians across the generations (from Bentham and Mill to Peter Singer to Toby Ord and Will MacAskill) have, I think pretty obviously, done immense good. So I think there’s basically zero chance that utilitarianism is strictly self-effacing. On the contrary, I think it would be obviously good if more people followed in the footsteps of these wonderful individuals.
Realistically, no human being is going to look much like the Platonic form of an impartial utilitarian. But belief in the theory can at least serve to nudge us in the right direction. In reality, the practical effect of belief in utilitarianism is not neglecting one’s family or pushing fat men in front of trolleys (seriously, who does that?), but just stuff like giving more to especially effective charities and otherwise seeking to improve the world with one’s marginal uses of time and money. (As I’ve previously argued, decent non-utilitarians should of course agree with utilitarians on this — we should all embrace beneficentrism — but for unknown reasons, proportionately fewer non-utilitarians seem to actually prioritize beneficence in this way.)
Anti-utilitarianism
Now consider: on what moral view would you not want others to do more good? (You might not want to make altruistic sacrifices yourself, but isn’t it plainly good, from your perspective, for others to do so?) Given that beneficentrism is so obviously desirable, and that beneficentrism closely correlates with utilitarianism in practice, it seems that everyone ought to want utilitarianism to be more widely accepted.
This strikes me as a pretty curious result. Considerations of self-effacement don’t speak to the question of which moral theory is true, of course. So I don’t take this to be any sort of argument against non-utilitarian views. But it may be an argument against vocally advocating for less beneficent views in public spaces, or (say) blasting out anti-utilitarian screeds on Twitter. (Note that I’m definitely not recommending engaging in deceptive teaching or research. Indeed, I’m not recommending deception at all. But there’s no obligation to publicly broadcast everything you believe, especially in cases where you’ve reason to expect that broadcasting a belief would be harmful. So it at least seems a legitimate question whether broadcasting anti-beneficent messaging is really a good idea.)
Philosophers’ attitudes
I know many academic philosophers, in particular, have a weirdly negative view of utilitarianism. (Like, some hate it with a passion.) I’m not really sure why this is, but I’d like to encourage them to reconsider. I think some of it, at least, stems from misunderstanding the view, which is why a lot of my research focuses on trying to address those misunderstandings and present the view in a more appropriately sympathetic light. Some people may have a quasi-aesthetic aversion to (their conception of) the utilitarian perspective. They may believe that utilitarianism neglects some important normative insights, and may find this aggravating. (Philosophers are easily aggravated by what they believe to be philosophical mistakes.)
On other days, I would try to convince anti-utilitarian philosophers that they’re wrong on the merits. But today, let’s grant them the truth of their own view, for sake of argument. Still, however aesthetically aggravating it might seem for others to not adequately appreciate the significance of the personal perspective (or whatever), don’t you agree that it’s objectively more important to save more innocent lives? And if so, shouldn’t that maybe temper your frustration with views that encourage others to do more of this more important thing?
All in all, I think there’s a surprisingly strong case to be made that it’s non-utilitarian views that ought to “usher themselves from the scene”—or at least from the public sphere. Perhaps “government house deontology” can continue to be debated in philosophy seminar rooms. But if it’s really true that utilitarianism, as a public philosophy, would do more good (without actually violating rights etc.), then shouldn’t even deontologists prefer to see it reign supreme? I don’t mean that they should lie in order to promote this desirable result—obviously they could still regard lying as wrong. But even to acknowledge widespread acceptance of utilitarianism as a desirable result would, I think, mark a striking change from how most currently think about it. (And it at least raises tricky questions about the moral advisability of anti-utilitarian public philosophy.)
Is Non-Consequentialism Self-Effacing?
I'm not a utilitarian, but effective altruists who are saving lives are doing something very good and they should continue to be encouraged. Even most non-utilitarians can recognize that.
I think that with things like the giving 10% pledge, people have decided that it's good to treat giving like a binary duty in some sense. Whereas in reality, you should give and give until you have hardly anything left. But that's unappealing, and it's probably not a good strategy to highlight this to get people to give more. You gave 10%? Great, now give XX% or else innocent human beings will die for trivial reasons. This is obviously true, but maybe it isn't so good for EA people to highlight this? Not sure.
The primary disadvantage of utilitarianism is that it obligates us to benefit non-reciprocators. There are two classes of non-reciprocators: intelligent actors who choose not to reciprocate (extreme example: a sadistic sociopath who will actively try to harm you as much as possible no matter how much you benefit him) and sentient systems who can't reciprocate (hedonium, far-future people without a time machine, chickens in a factory farm). Utilitarianism is not the worst ideology in this regard. But it's also not the most helpful. Selfishly, I'd rather people operate on the principle "Harm those game-theoretic agents who harm me, benefit those who benefit me", i.e., something like the tit-for-tat strategy. I guess an even better principle would be "Always do the thing that most benefits Andaro's preference-satisfaction", but that's hardly a universalizable imperative that people will accept. So I'm happy to cooperate with cooperators and defect against defectors, without demanding self-sacrifice for the sake of non-reciprocators of any kind.
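For readers unfamiliar with the reference, the tit-for-tat strategy the comment invokes is a well-known rule from iterated game theory: cooperate on the first encounter, then mirror whatever the other party did last. A minimal sketch (the function name and move encoding are mine, purely for illustration):

```python
# Tit-for-tat: cooperate first, then copy the opponent's previous move.
COOPERATE, DEFECT = "C", "D"

def tit_for_tat(opponent_history: list) -> str:
    """Return this round's move given the opponent's past moves."""
    if not opponent_history:
        return COOPERATE  # open with cooperation
    return opponent_history[-1]  # thereafter, mirror their last move

# A cooperator keeps being met with cooperation; a defector gets punished.
print(tit_for_tat([]))          # first round
print(tit_for_tat(["C", "D"]))  # opponent just defected
print(tit_for_tat(["D", "C"]))  # opponent returned to cooperating
```

This captures the commenter's stated policy exactly: benefits flow to those who reciprocate, and defection is answered in kind, with no standing obligation toward those who can't or won't cooperate.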
But let's grant for the sake of argument that we want to maximize hedons in the universe. Even then, what actual self-described utilitarians are doing is often not so great. Take the Repugnant Conclusion, for instance. I think the logical thing to do is to fully accept its mathematical formulation and conclusion. Clearly, 10 people enjoying 10 hedons per hour for 10 hours is less good than 10 trillion people enjoying 1 hedon per hour for 10 trillion hours. You'd have to be mathematically illiterate to not accept this.
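The arithmetic behind this comparison is simple total-utilitarian aggregation: people × hedons-per-hour × hours. A quick sketch of the two populations from the text (treating "hedons" as a straightforwardly additive quantity, as the comment's framing assumes):

```python
# Total-utilitarian comparison of the two populations described above.
def total_hedons(people: int, hedons_per_hour: float, hours: float) -> float:
    """Total utility under straight aggregation: people x rate x duration."""
    return people * hedons_per_hour * hours

small_happy = total_hedons(10, 10, 10)         # 10 * 10 * 10 = 1,000 hedons
vast_modest = total_hedons(10**13, 1, 10**13)  # 10^13 * 1 * 10^13 = 10^26 hedons

assert vast_modest > small_happy
print(small_happy, vast_modest)
```

The vast low-intensity population comes out ahead by twenty-three orders of magnitude, which is the commenter's point: on the totalist math itself, there's nothing to dispute.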
And yet, when people talk about the RC, they make all kinds of absurd assumptions. Parfit should never have invoked the paraphernalia of poverty (potatoes and muzak, etc.) in this context. There is no reason whatsoever to anchor the "life barely worth living" zero line at "humans living in poverty". The two phenomena are not remotely the same thing. Very poor people could hypothetically be immensely happy (think hedonium, or at least people genetically wired to enjoy everything enormously), and very rich people could be totally miserable and below the zero baseline. These psychological anchoring effects have distorted the debate ever since.
It's also strange to see people default to bad proxy metrics like "number of lives saved", as if a saved life automatically had positive hedonistic utility and/or positive externalities for the rest of the world. That could be true in some cases, totally wrong in other cases, or perhaps even totally wrong in almost all cases. I've seen people use the "number of lives saved" metric to justify why suicidal people shouldn't have the right to kill themselves, as if a forced life of a person who actively wants to die could be assumed to have robustly positive hedonistic utility.
As for longtermism, I'm not even convinced they're getting the sign of their intervention right. If human extinction prevents more suffering than pleasure, they will still try to prevent human extinction, thereby harming their own ostensible utilitarian goals. I'm not claiming that this is true. I don't know if it is. But if it were true, and completely predictable, people like Ord or MacAskill would never get it right, for status reasons alone. They would be working in the wrong direction, and no internal or external incentive would stop them from doing so.
Lastly, there is a weird silence surrounding the optimal solution space for utilitarianism. You'd think there would be a considerable number of EAs and academic utilitarians actively examining the possibility of turning a decent fraction of the cosmic commons into hedonium, or at least finding ways to make the idea more appealing. You could do this without violating anyone's rights or the non-aggression principle in any way. But no one does this. AFAICT, no one's even seriously looking into the possibility. Even if it doesn't pan out, maybe something close enough to it pans out. Turning even 0.001% of the cosmic commons into the next-best implementation of hedonium sounds like it should be a top priority for utilitarians. But it's weird and low-status, so instead they advocate far less useful things that sound higher-status.