79 Comments
Nov 22, 2023 · Liked by Richard Y Chappell

Why does Srinivasan use the expected value of being an anticapitalist revolutionary as an example of something that is hard to quantify? Anticapitalist revolutionaries have been around for more than a century now, and they have enough of a track record to establish that their expected marginal value is massively negative. Becoming an anticapitalist revolutionary is a rational thing to do if you want to maximize death and suffering. If EA philosophy stops people from becoming anticapitalist revolutionaries, then it has already made the world a better place, even if they don't go on to do any good at all.

Nov 23, 2023 · Liked by Richard Y Chappell

Others have said similar things, but to add my two cents:

To begin with: I am sympathetic to, and probably count as, an EA, so I am not really the kind of person you are addressing. But I can think of a few things:

First, you really might disagree with some of the core ideas: you may be a deontologist, so that some proposed EA interventions, though positive in expectation, are still impermissible (e.g. a "charity" that nonconsensually harvests organs from the homeless and donates them to orphans is bad, no matter how compelling your EV calculation). Or, as Michael St. Jules points out, you might reject any number of the propositions supporting longtermism.

Second: Agreement with the core ideas doesn't imply all that much; you say to Michael that you are only interested in defending longtermism as meaning "the far future merits being an important priority"; but this is hardly distinctive to EA! If EA just means, "we should try to think carefully about what it means to do good", then almost any program for improving the world will endorse some version of that! What makes EA distinctive isn't the versions of its claims that are most broadly acceptable!

You can also agree in principle with "core" EA ideas but think there is some methodological flaw, or a particular set of analytical blinders in the EA community, such that the EA version of those ideas is hopelessly flawed. This is entangled with the next point.

Third: if you agree with the EA basics but think EA is making a big mistake in how it interprets/uses/understands those basics, why not get on board and try to improve the program from within? Perhaps because those misunderstandings/methodologies/viewpoints are so central to EA that it makes more sense to just start again fresh, or because EA as an actual social movement is too resistant to hearing such critiques.

Like, take the revolutionary communist example from the other end: lots of people (even many EAs) would agree with core communist principles like "Material abundance should be shared broadly", and revolutionary ideas like "We shouldn't stick to a broken status quo just because it would take violence to reach a better world"--and there is a sense in which you can start as a revolutionary communist, and ultimately talk yourself into a completely different viewpoint that still takes those ideas as fundamental but otherwise looks nothing like revolutionary communism (indeed, I think this is a journey many left-leaning teenagers go through, and it wouldn't even surprise me if some of them end up at something like EA).

But I don't think people who don't start from the point of view of communism should feel obliged to present their critiques as ways of improving the doctrine of revolutionary communism. This is for both philosophical reasons (there is too much bad philosophy in there that takes a long time to clear out, better to present your ideas as a separate system on their own merits) and social ones (the actual people who spend all their time thinking about revolutionary communism aren't the kind of people you can have productive discussions with about this sort of thing).

Obviously that's an unfair comparison to EA, but people below have pointed out that EA-the-movement is at least a little bit cult-y, and has had a few high-profile misfires of people applying its ideas. I personally think its successes more than outweigh the failures, but I think it's fair for someone to disagree.

Finally, I'd like to try to steelman the "become an anticapitalist revolutionary" point of view. Basically, the point here is that "thinking on the margin" often blinds one to coordination problems: perhaps we could get the most expected value if a sufficiently large number of people became anticapitalist revolutionaries, but below some large threshold there is no value at all--so the marginal benefit of becoming a revolutionary is negligible, yet it still may be the case that we would wish to coordinate on that action if we could. This is (I think) what Srinivasan is getting at: the value of being a revolutionary is conditional on lots of other people being revolutionaries as well. It's not impossible to fit this sort of thinking into an EA-type framework, but I think it's a lot more convoluted and complicated. But I don't think we should rule it out as a theory of doing good, or of prioritizing how to do good, even if I don't find that particular example very compelling.
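To make the coordination problem concrete, here's a minimal sketch of the step-function payoff I have in mind (all numbers are made up, and "value" is deliberately abstract):

```python
# Toy model: the payoff of revolution as a step function of participation.
# Below a critical mass nothing is achieved; above it, a fixed payoff V is.

CRITICAL_MASS = 1_000_000  # hypothetical number of revolutionaries required
V = 10_000_000             # hypothetical payoff if the threshold is crossed

def total_value(participants: int) -> float:
    """Total value produced by a given number of revolutionaries."""
    return V if participants >= CRITICAL_MASS else 0.0

def marginal_value(current: int) -> float:
    """Value added by one extra revolutionary, given the current count."""
    return total_value(current + 1) - total_value(current)

print(marginal_value(5_000))              # 0.0 -- far below the threshold
print(marginal_value(CRITICAL_MASS - 1))  # 10000000.0 -- the pivotal joiner
```

Marginal reasoning says "don't join" at almost every point on this curve, even though coordinating to cross the threshold could (by assumption) be worth it; that's the tension with standard EA-style marginal thinking.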

Nov 22, 2023 · Liked by Richard Y Chappell

An interesting case is that Émile Torres is among the best-known and most aggressive critics of effective altruism, and I recall them (very admirably) helping to run a fundraiser for GiveDirectly -- in fact via the GWWC website.

I really think it is worth taking seriously that the main concern is with the peculiar and sometimes troubling social scene that has sprung up around the EA idea. (And the adjacent and much more troubling rationalist social scene.)

If people let their (IMO justified) worries about the people and social dynamics bleed over a bit into their judgment of the philosophy, well, maybe that's a good heuristic if you aren't a professional philosopher.

Nov 22, 2023 · Liked by Richard Y Chappell

I've seen a lot of bizarre criticisms of EA lately. If someone says EA fails in some way, I wonder what movement they find better.


I wouldn't underweight do-gooder derogation; I think that's most of it.

That instinct isn't merely the projection of some vague internal sense of shame or guilt, turned into hostility toward whatever is making you feel that way. It's often a threat reaction. Threats create fear, and the fear/threat reaction is the source of all hatred.

It always seems inexplicable and totally irrational when someone else has a hate-filled threat reaction about something you don't personally value or care much about (or might assign a negative value to). Someone who gets angry and hostile about proposals to ban or limit access to guns is incomprehensible to someone who doesn't like guns. To get into this mindset, you have to think of something that you truly LOVE. Or are addicted to. Something without which life barely seems worth living, or would feel unbearable. And then imagine someone trying to persuade others to ban it, or at least to make its social consequences very severe.

I could easily make a perfectly rational and statistically supported public health and safety argument for banning, for example, bicycles, or dogs, or the internet, or alcohol, or porn, or having more than one child, or experimenting with new technologies without democratically approved permission and government oversight. Easy to imagine the hostility those would generate.

A lot of people feel like life wouldn't be worth living without their money; they are extremely emotionally attached to it, and to the idea that it should not be distributed in any manner other than it is now. They are also attached to their opinion of themselves as very rational, smart, moral, and ethical. And they will react with hatred to any perceived threat to those things, or to people advancing an argument they view as a threat.

On the topic of altruism more generally, I think this often goes further in a subset of people who have a wired-in primal impulse to be repulsed by, and to hate, those they perceive as weak and excessively compassionate. It's some type of carryover from times with a much higher risk to basic survival, when one weak member of the tribe (or members being too compassionate to them or to outsiders) could threaten the survival of the whole tribe. This is a pretty useless instinct nowadays, but it clearly still exists. If you search yourself, you can probably think of a few examples where you have a mild reaction of contempt for someone you view as irrationally and stupidly bleeding-heart. Take that feeling and amplify it by a hundred, and I think that's why EA gets hostility from some.

I don't put much stock in people's rationalized explanations for this stuff and think it's mostly emotional orientations with narratives layered on top.

And on that note, the overly academic, colorless, non-vivid language and insider vocabulary that most EA proponents use strike many as a distasteful and un-self-aware status game, which turns people off.

Nov 22, 2023 · Liked by Richard Y Chappell

On the question of earning to give, I think there is a principled critique that doesn't rely on the edge cases involving immoral work.

The critique involves what you are asking people to do: to split their life up in such a way that they go into the highest-earning job in order to then donate it all, without any further involvement on their part. Unless you're already a committed consequentialist, this is a totally unreasonable thing to demand of people. What you do and what you value are just completely divorced from one another. This seems untenable (at least intuitively) to many people.


This is so true!

Dec 1, 2023 · Liked by Richard Y Chappell

I know this is a bit late, but I wanted to say one more thing, that I think maybe gets at the objection to EA in a different way.

Finance journalist Matt Levine wrote something relevant to this recently. I'll paraphrase what he says to avoid extended quotation:

1. The core of EA is that your charitable donations should focus on saving lives, not on getting your name put on buildings.

2. But you can take a wider view and note that "Spending $1 million to buy mosquito nets in impoverished villages might save hundreds of lives, but spending $1 million on salaries for vaccine researchers in rich countries has a 20% chance of saving thousands of lives, so it is more valuable" (a toy version of this calculation appears after the list).

But...

3. There is no obvious limit to this reasoning; paying EAs high salaries to research how to save lives might have higher expected value than bed nets OR vaccines, so

4. "Eventually the normal form of “effective altruism” will be paying other effective altruists large salaries to worry about AI in fancy buildings, and will come to resemble the put-your-name-on-an-art-museum form of charity more than the mosquito-nets form of charity."

He says, "You do the extended-causality thing because … you think it is better? Because it has more capacity, as a trade — it can suck up more money — than the mosquito nets thing? Because it is more convenient? Cleverer? More abstract?"

And then he goes on to compare it to carbon offsets. I think a tidier way to express all of the above, and the reason why EA and offsets and so forth go together so well, is that they are examples of the financialization of charity.

When people object (as in other comments in this thread) to attaching a numerical value to love, or whatever, I think what they are really objecting to is this sense that we've financialized it: not just that there's a numerical value, but that the value perfectly summarizes how we should treat love, in terms of trades and trade-offs and so forth.

This kind of response is more of a core disagreement with EA, and with utilitarianism more broadly: it can encompass deontological critiques, for example.

But even if you're OK with that, I think the cult objections, the objections to earning to give, and so forth can still flow from a certain critique of the finance-ness of EA.

Which is to say that, even normal, regular finance, the kind that's just about money and stocks and bonds, has a tendency to abstraction and opaqueness that has historically contributed to speculation, fraud, and other things of that nature. And I think people might feel of longtermism, or earning to give, or donating millions of dollars to deworming charities that might not accomplish anything, that they are sort of like the subprime mortgages of the charity world.

As with normal finance, efficiency is good, and only the craziest people think banks should make *no* efforts to find complex trades that they expect to pay off--but the more abstract and convoluted a trading strategy, the more divorced from the "real" economy, the more likely it is to just be Dutch tulips or bored apes.

And the fact that SBF was a prominent figure both in actual financial speculation and fraud and in the kind of EA that I'm arguing is analogous feels more like a core flaw in EA's approach than just a coincidence, or a bad apple, or whatever.

I think that's why I find myself in the middle: I'm not unsympathetic to many of the criticisms of EA the movement, just as I thought the NFT boom and the WeWork saga were examples of finance run amok, creating the illusion of value out of speculation and fraud--but I still think banks should make loans and mortgages, and I still think charities and donors should think about how to get more value from each charitable dollar they spend. The problem is, as Levine suggests, there's no bright line, no place to stop and say, "this is clearly as abstract and high level as we should be".


Great essay. My thoughts on EA hate:

1. I don’t get why humans have the drive to leave no good deed unpunished, but this is core to a lot of people’s nature. Explaining that probably explains EA hate.

2. The movement of EA can be separated from the philosophy, and there are some legitimate critiques of how the movement operates. Slippery arguments can conflate the two, convincing naive people that effectively doing good is wrong.

Nov 22, 2023 · edited Nov 22, 2023

"core EA claims on controversial topics (from “earning to give” to “longtermism”) are clearly correct"

That seems pretty disputable for longtermism, and I would say that longtermism is clearly not clearly correct. I don't mean that it's clearly false, and I definitely give some longtermist views moderate weight myself under normative uncertainty, but I can imagine someone reasonably giving it very little weight and practically ignoring it. It typically relies on many controversial claims, including about aggregation, population ethics, tractability, predictability and backfire risks, attitudes towards uncertainty/ambiguity, risk and decision theories. Bias and motivated reasoning can make people more likely to do things that are overall harmful, and this is easier to do when there's poor evidence and feedback, as is often the case for longtermist work.

(To be clear, I think other areas of EA are subject to bias and motivated reasoning, too.)

Longtermists also have some high-impact failures that should make them worry about the judgement of longtermists and the promotion of longtermism or its concerns: Sam Bankman-Fried and potentially accelerating AI risk. On the latter:

1. there are the recent events with OpenAI,

2. longtermists potentially being counterfactually responsible for the founding of OpenAI, funding of DeepMind (https://twitter.com/sama/status/1621621724507938816 and https://forum.effectivealtruism.org/posts/fmDFytmxwX9qBgcaX/why-aren-t-you-freaking-out-about-openai-at-what-point-would; see also the comments there), and

3. Open Phil supporting OpenAI's early growth (although Moskovitz disputes this https://forum.effectivealtruism.org/posts/CmZhcEpz7zBTGhksf/what-happened-to-the-openphil-openai-board-seat?commentId=weGE4nvBvXW8hXCuM) in exchange for board seats and influence it has since lost.


You accuse Mary Townsend of motivated reasoning. Interestingly, she accuses Effective Altruists of motivated reasoning, too. She writes "The pathos of distance allows the good deed on another continent to take on a brilliant purity of simple cause and simple effect—you have no connection to the recipients of the hypothetical net beyond their receipt of your gift—and so you, back at home, can walk right past the homeless guy without having to look at him, or for that matter, smell him. You have purchased the carbon offset credits of the heart."

If you truly want to understand the viewpoint of a critic of EA, it seems to me that the obvious first attempt, akin to the philosopher's "Have you tried turning it off and on again?", would be to consider the possibility that the people you disagree with believe what they say they believe, for the reasons they say they believe it. In the case of Mary Townsend, she outlines a theory of value that is entirely in line with caring for the homeless guy in front of you over distant people to whom you send impersonal cash. She says that "almost every human good there is beyond mere accumulation of healthy days or years—for instance, the goods of justice, love, truth, and compassion—are not amenable to numbers, let alone predictable by dint of them." Accordingly, rather than attempting to calculate such things mathematically, we should presumably instead respond in a human, interpersonal way to the people around us. We cannot love someone with whom we cannot interact, so, even though it would be (hypothetically) just as virtuous to love someone who is halfway around the world, or who is not going to be born for another five hundred years, we should not try to do this because it is impossible.

I do not expect you to agree with her on this, obviously. But one might hope that you would not need to agree with it in order to consider the possibility that others might.


Until you understand the critique, yes, your addressing it will feel strange. You don’t get it. And don’t even seem to want to. Which is so weird, indeed.


I don’t know much about it, but from the outside and in the wake of SBF, EA can seem like an elitist social movement that seeks to recruit people with potential and teaches them that they have an obligation to seek out positions of power and wealth (‘earn to give’) because now they belong to a group that has figured out morality. In other words EA seems to have cult-like elements.

Nov 22, 2023 · edited Nov 22, 2023

Richard, considering you take the view that longtermism is commonsensical (as I do), how concerned are you about AI risk? I know that Will MacAskill is not nearly as persuaded of the likelihood of that particular disaster as others are, and I would be curious to know what you, another academic ethicist, think.


Re: earning to give and billionaire philanthropy, impulsively disliking these things is not that unreasonable, given a certain outlook on what morality is supposed to be.

I think the people that criticise EA on these points tend to think of morality in a very meta way - for them, it's a system that assigns praise (or blame) in a society, as opposed to the object-level, prescriptive stuff analytic philosophers like to talk about. You might say that they conceive of morality as an institution, and said institution is but one of the many levers you can use to change the way things are.

With that mindset, it's easy to interpret the standard EA spiel about donating (and the implicit acknowledgement that the good you do mostly scales linearly with the amount you donate) as a claim about people's virtue, since evaluating people's virtue is the only thing morality-as-an-institution ever does. Specifically, they read it as "that billionaire who rides around in a Lamborghini happened to have a sudden spasm of conscience and donated 1% of his fortune to GiveWell, so now society ought to regard him as literally 100x more virtuous than the guy who's been volunteering in soup kitchens for ten years of his life". Obviously this is pretty wild, and is not the direction they'd like public morality to take. Hence the strong negative reaction to (what they think is) effective altruism.
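To spell out the implied arithmetic (every number here is hypothetical; only the "1%" and "100x" come from the caricature above):

```python
# Hypothetical numbers behind the "100x more virtuous" reading.
fortune = 1_000_000_000                # assume a $1B fortune
billionaire_donation = 0.01 * fortune  # the 1% donation: $10M

volunteer_value = 10 * 10_000  # ten years of soup-kitchen work, valued
                               # (arbitrarily) at $10k per year: $100k

# If virtue scaled linearly with dollars of good done, the implied ratio:
print(billionaire_donation / volunteer_value)  # 100.0
```

On the morality-as-institution view, publicly endorsing that ratio as a verdict on the two people's characters is exactly what seems wild.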

Not that there's any excuse for refusing to talk about object-level morality because "it's an institution, a historically contingent phenomenon!" - it's not like saying these magic words makes the normativity inherent in your every action go away.


Unfortunately, as good as it might feel, sneering about capitalism or whatever on social media doesn’t actually help anyone very much, and the unflattering comparison to people who donate their kidneys and give money to effective charities must be very annoying.
