Dec 1, 2023

“So many Twitter critics confidently repeat some utterly conventional thought as though the mere fact of going against the conventional wisdom is evidence that EAs are nuts.”

If you replace the word “evidence” with “conclusive proof,” this accurately sums up literally every criticism of longtermism I’ve ever read.

Dec 2, 2023 (edited)

I think I can explain how someone can be hostile to EA ideas despite their being obviously true. It's not that they're hostile to the ideas themselves, but rather to people consciously adopting them as goals. More generally, I think the EA criticisms you see are, low-decoupling style, mostly not criticisms of the ideas on their own merits but rather (very strong and universal) claims about the psychological and social effects of believing them, effects which critics think cause problems that are not unique to current EA institutions but basically intrinsic to any human attempt to implement EA ideas.

The claim is something like this: the idea that you should seek to do cause-impartial beneficent good is such a brain rot, and so terribly corrosive to human motivation, that even though on paper there seems to be no possible way pursuing it could be worse than never entertaining it, in real life it just destroys you.

According to these critics, every time anyone tries to adopt this as a deliberate goal it's like picking up the One Ring: you're near-guaranteed to end up in ruinous failure because...? A bunch of reasons are offered, some of which contradict each other. One is that it smuggles in being okay with the status quo, and not being okay with overthrowing modern civilization to produce something else. Another is that it sets you up to be easily manipulated, because such a distant and broad goal lets you justify anything through your biases and/or be tricked. Another is that it gives you a sense of superiority over everyone around you and lets you take actions only very distantly connected to doing good in the here and now, which means you can always justify pretty much any bad thing you want to do as part of the greater good. Another is that if you believe in EA for real, it corrodes your soul, stops you from having close human relationships, and lets you neglect side constraints and the instinctive warning signs that you're going wrong.

The claim isn't that any of these are intrinsic features of the ideas, but just that, because of the way human minds work, if you start believing strongly enough that you should do impartially beneficent good, you'll get captured and possessed by this mindset and turn into a moral monster no matter what.

So on this view, if you do care about impartial beneficent good, you have to do something like trick yourself into thinking that's not what you really want, and pursue some narrower, more local project with tighter feedback loops. BUT of course you have to forget that this is why you did it, and forget the act of forgetting... doublethink style.

And obviously no real evidence is given that this is how it necessarily goes, other than pointing at a few high-profile EA failures, as if there weren't also high-profile failures all over the place in more local and partial attempts to do good. (And as if the usually preferred alternative of starting an anti-capitalist revolution doesn't have every problem just listed, to a far greater extreme.)

It's essentially a conspiracy-theory/genetic-fallacy psychoanalysis argument. And this view can't account for the good that EA has unequivocally done, except to say something like "oh, that all happened before you got fully corrupted, or as an accidental, contingent side effect on the way to full corruption."

And of course it's also diametrically opposed to the point you quote at the start of your post: on this telling, EA ideas are somehow both obvious tautologies and so extreme and strange that taking them seriously cores open your brain and instantly turns you into a moral monster.

Dec 1, 2023

Hard agree, and as someone who took the devil's advocate view in your previous post, I should say clearly: while there's much to quibble with, I think the mainstream EA movement is tremendously thoughtful about these issues, and has done a great job promoting some obviously excellent, previously neglected causes. There is a huge amount to admire, both on an intellectual level and on a movement level.

Dec 4, 2023

In my bubble, a prominent reason for not being impartially good is respect for one's parents and religion. Our parents teach us what is moral and we follow their example. When our parents cook a turkey on Thanksgiving, we appreciate that, and emulate it, and pass the tradition on to our own children. When our religion lays out moral rules, those who want to be good followers take these seriously, and put a lot of moral weight on widows and orphans and embryos in their local church.

Departure from these morals is difficult and often seen as disrespectful. When children become vegan and avoid the Thanksgiving tradition, that can lead to tough arguments and seems ungrateful. When people suggest that some number of shrimp would be more important than a human life, it's an attack on moral roots that are thousands of years old.

In my experience, and maybe contrary to your article, people are quite open to thinking about how to lead a good life. But they (and I myself) find it difficult to put those thoughts into practice when they go against tradition and the wisdom of our ancestors.

Dec 2, 2023 (edited)

I think a lot of people (e.g. Ross Douthat) worry that there's a slippery slope from thinking about effectiveness to the more "sinister" aspects of utilitarianism. Of course this is a slippery-slope fallacy in Theory, but many who recognize this still think the worry is a healthy part of the Practice of avoiding "naive maximization".

Dec 2, 2023 (edited)

I'm not sure whether I care about animals more than you do. I'd guess the major difference between me and sentient-eating consequentialists is that I think the time is especially ripe for massive societal progress on the issue, particularly because almost all of the non-trivial tradeoffs at this point are the result of negative network effects (e.g. difficulty finding a restaurant, strain on relationships, time spent explaining), whereas other forms of moral low-hanging fruit have substantially more inelastic costs.


The case you are making only suffices to demonstrate that EA is yet another group that cares about doing good (which is great), but being a group that cares about doing good doesn't make it as unique as you seem to believe. (The "most people then just go and donate to the dog shelter" part makes it sound as though you think the "normies" are very simple-minded.)

For example, it isn't just weird utilitarian effective-altruist nerds who care about factory farming. The anti-factory-farming movement has been around a lot longer than EA. Switzerland banned battery cages for hens in 1992; the whole EU followed in 2012 (under a law passed in 1999). Was that the effective altruists?
