54 Comments
Apr 4 · Liked by Richard Y Chappell

Marxists: Every single leader we've ever produced has been a complete moral monster. This has no bearing on whether our view is bad. However, SBF's existence decisively refutes EA.

Apr 4 · Liked by Richard Y Chappell

The greatest challenge weaving through these points is Plant's "meat-eater problem": it undercuts much of 10, greatly shifts 15, forks 19, underscores 20, provides the clearest example for 25, and bears heavily on 31, for starters.


Well, you suggest that it's good to help people. That seems to assume utilitarianism, and thus you're committed to thinking that you should feed your children to the utility monster.

Or so I've heard from EA critics.

Apr 4 · Liked by Richard Y Chappell

I think 13 (hits-based giving) is potentially objectionable, citing #1 on your list of things you don’t believe. Depending on how concerned you are about the general tendency to skew our reasoning toward our own benefit, it might be the case that even a little (attempted) hits-based giving results in a lot of self-serving waste, such that it doesn’t pencil out.

Apr 4 · Liked by Richard Y Chappell

As someone who is not part of the EA movement, but who has generally viewed it sympathetically as standing for the simple proposition that people who can afford to should give more of their money to charity, and should do so in the way that maximizes its impact, I find I agree with most of these points, and appreciate the post as a deeper dive into them. I also think that a lot of the criticism I've read seems driven more by a distaste for those involved in EA than by objections to EA itself. For example, the recent Wired article seemed a more persuasive case against giving to charity generally than against EA.

However, I do think points 12 and 22 are potentially problematic for a couple of reasons. Together, they seem to stand for the proposition that it's self-indulgent to engage in hands-on efforts to improve the world when you have the capacity to make a lot of money doing something else and use the proceeds to fund a greater impact than your hands-on efforts could. Thus, a plastic surgeon would be self-indulgent to take a low-paying job working for an NGO treating burn victims in a war zone when they could make a whole lot of money running an aesthetic practice on Park Ave and fund the salaries of a dozen doctors at that NGO. Likewise, an educator should resist the temptation to work in an underperforming public school in favor of starting a test-prep company for the affluent and using the proceeds to provide greater resources for underperforming schools.

One problem with this approach is that successfully diverting skilled people from working directly to better the world, in favor of a focus on making money, would mean that at some point there would be no skilled people left doing the direct work.

The second problem is that what "via permissible means" refers to is not obvious, and it's doing a lot of work. In the above examples, both running a test-prep company and running an aesthetic plastic surgery practice seem morally neutral on the surface, but it's hard to gauge the large-scale impact of different kinds of work. Even here, one could be said to contribute to harmful views on body image and the other to exacerbating inequality in access to higher education. Other lucrative careers are murkier: working in finance may mean helping to increase the wealth of companies which will in turn use it to fight against environmental regulations, etc.

Then there's the problem of weighing adjustments to the permissibility of one's work against the potential gains in profits and the resulting good one can do, which seems like it could lead to a slippery slope. Perhaps the plastic surgeon can double his profits by using substandard materials, with only a small risk to his patients, for example. I have not followed the SBF situation closely, so I don't know if this is the kind of reasoning that led to it, but it seems at least superficially plausible that it could be.

Apr 4 · Liked by Richard Y Chappell

I'm pro-EA, but my one misgiving is something like the following: certain kinds of actions are more legible to the EA framework than others, and I worry both that this introduces a bias as a whole to the way EA evaluates interventions, and maybe more worryingly that if EA became widely enough adopted it could have perverse incentives to ignore or exacerbate solvable problems because they're not as legible.

I think this sort of objection can be covered by some of the points you make above, and it's certainly not unique to EA, but it makes me somewhat more sympathetic that while more EA is better on the current margin, universal adoption of EA norms makes me a little more nervous.

As an example of what I mean: consider the current war in Gaza (a less controversial topic than EA itself :P). I think EA-style analyses will more easily be able to evaluate interventions like increasing food donations and vaccines, but will struggle a lot more to evaluate interventions like "write a letter to your congressperson encouraging them to insist on conditional arms sales to Israel", or whatever... But it may be that the most effective thing to do for Gazans is to compel Israel to end the war, even though the contribution of any individual action to that outcome is basically unanalyzable. Obviously, the EA view is seeing something true and important: while the war is ongoing, we should absolutely do what we can to get food and medicine into Gaza--but I wouldn't want people to ignore the importance of building a political coalition to shape US-Israeli political relations.

Fwiw, my framing above suggests a pro-Palestinian point of view, but you can make the same style of argument in the other direction: one might think that the best thing to do for Gaza is to overthrow Hamas and so individual actions that contribute to the US ensuring Israel has enough freedom of action to achieve this are important, but they will be almost impossible to quantify.

In our current world where we have a zillion marches for Palestine that achieve nothing, and where people think giving food to Gazans is some underhanded ploy by Biden, I think more of the EA thinking is necessary.... But I can imagine a world where we look at warzones and only think "how can I get more vitamin A into this warzone?" and not "can we do anything to end this war?" because the latter is too hard to quantify, especially at the level of individual actions.

Apr 4 · Liked by Richard Y Chappell

Nothing here is obviously objectionable. These are good thoughts, and I think your perspective is quite palatable to the average smart person. The tiling-the-world and doubling scenarios are quite repugnant, although they could be defended on strictly utilitarian grounds. While those sorts of arguments belong in philosophy journals, I think these belong on the forum and Substack.

The most powerful critiques of EA would need to be pretty sophisticated and not necessarily obvious. For example, as one commenter below noted, there's the meat-eater problem: more people and a long-term future might mean lots of animal suffering. Another would be the sort of collective-action dynamics around the evolution of altruistic behavior and fertility, as another noted. But these ideas are like a 9/10 on the repugnance scale and need to be defended carefully. I think the person best equipped to do that would actually be an EA.

Overall, EA seems to be among the best/smartest large organizations/social movements out there.

Apr 4 · Liked by Richard Y Chappell

Great piece, and I agree! The one point I'd urge a bit more epistemic humility on is that "Ethical cosmopolitanism is correct: It's better and more virtuous for one's sympathy to extend to a broader moral circle." Although this fits with our WEIRD psychologies, historically it has not been the norm, and there is very little you can do to convince someone who doesn't have those egalitarian intuitions to care about ethical cosmopolitanism. It's also a little more complicated considering that, I assume, even someone like Peter Singer ethically cares about his mom more than about a random stranger in Uganda. Nonetheless, a very great read.

Apr 4 · Liked by Richard Y Chappell

31 (the risk of appearing boastful makes talking about the good you do *more* praiseworthy) might have a good Chesterton's-fence argument against it, somewhere. This is a half-baked take, so maybe someone else can refine it, but I haven't explored the strongest arguments in favor of boast-avoidance as a social norm enough to feel confident in rejecting them.

Apr 6 · Liked by Richard Y Chappell

I'll come and bite as well. No one has voiced what I think is the main and real point of disagreement with EA: A lot of people don't believe you can do or are in fact doing all of these things.

1) Can you do it? Is it possible to quantify "good" at all? And does the EA movement have the ability to quantify it accurately?

Is it possible to quantify good? There are several reasons to doubt this: (a) people disagree on what is good (more on that below); (b) good is very hard to quantify for lots of real-world-is-messy normal social science reasons; (c) good is generally a two-edged sword, involving good actors (virtue) and good actions (consequentialist impact).

More specifically, can EA people quantify the good? Well, SBF couldn't even work properly with money, and that's much more easily quantified. And one of the criticisms that actually hit home in that horrible Wired article the other week was the fact that the GiveWell rankings seem to have experienced significant churn. The methodology has not been shown to be sound enough.

2) Are EA people really doing what they say? Or are they virtue signalling?

It's a commonplace that anyone or any group who says "we're doing good" ultimately proves to be... not that. See: Catholic church. I think this is a really good heuristic! We are bombarded by cult messages all the time, from various churches, to Trump, to L'Oreal, to boomers, to... all groups of society who believe that they know better and are better than the rest of us. We treat these messages with the utmost suspicion, and rightly so.

When faced with a new group who claim to be doing good in the world, the little Bayesian homunculus in our heads considers two possibilities: (1) these guys have got it right and are telling the truth; (2) these guys are as deluded as all the others, and mostly just scratching an itch for group identity and moral superiority. And (2) wins every time, because based on previous experience, it's 100x more likely. To overcome that Bayesian gradient, EAs will just have to be patient and demonstrate that they are who they say they are, over a period of decades.
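That "Bayesian gradient" can be made concrete with a toy calculation. This is only a sketch: the 100x prior odds come from the comment itself, while the 3x-per-decade likelihood ratio (how much more probable a decade of consistent, verifiable good conduct is under "honest" than under "deluded") is an assumed number for illustration:

```python
# Prior odds of "deluded" vs. "honest", per the rough 100x estimate above.
prior_odds_deluded = 100.0

# Assumed likelihood ratio: each decade of demonstrated sincerity is taken
# to be 3x more likely if the group really is what it says it is.
lr_per_decade = 3.0

decades = 0
odds = prior_odds_deluded
while odds > 1.0:  # until "honest" becomes the more probable hypothesis
    odds /= lr_per_decade  # Bayes: posterior odds = prior odds / likelihood ratio
    decades += 1

prob_deluded = odds / (1.0 + odds)
print(decades)                  # 5 decades to flip the odds
print(round(prob_deluded, 3))   # 0.292 residual probability of "deluded"
```

On these (made-up) numbers it takes roughly five decades of track record before "they're for real" becomes the more probable hypothesis, which matches the comment's point that EAs will have to be patient over decades.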

***

Now to a couple of specifics:

(3) "global poverty, factory-farming, future pandemics" - lots of people genuinely don't believe in those as moral areas. Either god made us rich and poor; or it's not the USA's job to cure global poverty; animal suffering is not a thing; Covid (some bullshit). I personally don't agree with any of those positions, but you must know that there are very large numbers of people who think each of those. And here's the problem: as soon as someone finds one position that EA stands for which they disagree with, that means that EA no longer, in their eyes, stands for what is morally good. And that instantly throws every single other EA calculation into question - even if that person might have agreed with those calculations otherwise.

This is already too long, so I'm going to stop. I personally am pro-EA. But I don't think it's hard to see why commentators would enjoy picking on it. Commentators gonna commentate!

Apr 6 · Liked by Richard Y Chappell

Well put, I agree with just about all of these. And I'm glad that you wrote this up, and that you're writing these posts generally, given how hostile many philosophers (of all people) seem to be (have become?) towards EA.

> If there’s a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.

Hmm, I don't think staying quiet to avoid being perceived as arrogant or boastful is necessarily selfish. Arrogance and boastfulness can make other people feel bad -- for example, if someone boasts that they own a large apartment, or donate $5K per month, it could cause the listener to feel inadequate or worthless or resentful. I would suppose that's one reason why we do have norms against those things. (Of course the negative effect may be smaller than the increased likelihood -- if there is one -- that others act more virtuously as a result, or it may be larger, who knows.)

Apr 4 · Liked by Richard Y Chappell

Very good piece, thanks!


I used to read objections from critics of EA some years ago and never found them very compelling. I haven't seen more recent critiques. I think it'd be worth directly engaging critics in conversation. They may raise legitimate concerns, if not with EA in principle, then at least in practice, and one might have some success in persuading them that the less defensible objections don't withstand scrutiny.


Re Point 27:

Consider the case of two prosperous and highly-intelligent individuals, A and B. A is an effective altruist who donates 75% of his ample income to charities approved by GiveWell. Mindful of the ecological impact of human population growth, moreover, A scrupulously avoids procreation, eventually dying with a clear conscience and zero offspring. B, on the other hand, contributes nothing to charity, either during his lifetime or posthumously, leaving his entire estate to the four surviving children he had by two wives. Which deserves more credit for "making the world better," A or B? Would it make the world better if ALL highly-intelligent people followed the same course as A?


Great summary, thanks for taking the time to write this! ☺️
