21 Comments

Here's one way to make a consequentialist critique of EA as it currently exists.

Consider the US-China status quo. The US is not attacking China in pursuit of regime change, and China is not conquering Taiwan. The risk of the former seems minute; the risk of the latter does not. What if a 5% increase in the chance that this status quo holds were more of a net positive than all non-x-risk-related EA efforts combined?

Here are some of the possible negative outcomes if China tries to conquer Taiwan:

- conventional and nuclear war between China and the US, and their allies, with the possibility of up to several billion deaths;

- hundreds of satellite shootdowns causing Kessler syndrome, leading to the destruction of most other satellites and leaving us with little warning of impending natural disasters such as typhoons and droughts;

- sidelining of AI safety concerns in the rush to create AGI for military purposes;

- an end to US-China biosecurity cooperation, and possible biowarfare by whichever side feels it is losing (which might be both sides at once; nuclear war would be a very confusing experience);

- wars elsewhere following the withdrawal of overburdened US forces, e.g. a Russian invasion of Eastern and Central Europe backed by the threat of nuclear attack, or an Israeli/Saudi/Emirati versus Iranian/Hezbollah war that destroys a substantial share of global oil production;

- economic catastrophe: a deep global depression, widespread blackouts, and years of major famines and fuel shortages, leading to Sri Lanka-style riots in dozens of countries at once, with little chance of multinational bailouts;

- a substantial decline in efforts to treat, reduce, or vaccinate against HIV, malaria, antibiotic-resistant infections (e.g. XDR/MDR tuberculosis), COVID-19, etc.

If your simplified approach to international relations is more realist than anything else, you probably believe that a major factor in whether war breaks out over Taiwan is the credibility of US deterrence.

How much of EA works on preserving, or else improving, the status quo between the US and China, whether through enhancing the credibility of US deterrence (the probable realist approach) or anything else? Very little. Is that due solely to calculation of risk? Is it also because the issue doesn't seem tractable? If so, that should at least be regularly acknowledged. Could the average EA's attitude to politics be playing a role?

To the extent that the US-China war risk is discussed in EA, I do not think it is done with the subtle political awareness that you find in non-EA national security circles. Compare e.g. the discussions here (https://forum.effectivealtruism.org/topics/great-power-conflict) with the writing of someone like Tanner Greer (https://scholars-stage.org/) and those he links to.

In case you are wondering, I have no strong opinion on which US political party would be better at avoiding WW3. There are arguments for both, and I continue to weigh them, probably incompetently. I do think it would be better if there were plenty of EAs in both parties.

I have no meaningful thoughts on how to decide whether unaligned AI or WW3 is a bigger threat. (Despite 30-40 hours of reading about AI in the past few months, I still understand very little.)

Jun 8, 2022 · Liked by Richard Y Chappell

I've read one alternative approach that is well written and made in good faith: Bruce Wydick's book "Shrewd Samaritan" [0].

It's a Christian perspective on doing good, and arrives at many conclusions that are similar to effective altruism. The main difference is an emphasis on "flourishing" in a more holistic way than what is typically done by a narrowly-focused effective charity like AMF. Wydick relates this to the Hebrew concept of Shalom, that is, holistic peace and wellbeing and blessing.

In practical terms, this means that Wydick more strongly (compared to, say, GiveWell) recommends interventions that focus on more than one aspect of wellbeing. For example, child sponsorships or graduation approaches, where poor people get an asset (cash or a cow or similar) plus the ability to save (e.g., a bank account) plus training.

I believe that these approaches fare pretty well when evaluated, and indeed there are some RCTs evaluating them [1]. These programs are more complex to evaluate, however, than programs that do one thing, like distributing bednets. That said, the rationale that "cash + saving + training > cash only" is intuitive to me, and so this might be an area where GiveWell/EA is a bit biased toward stuff that is more easily measurable.

[0]: https://www.goodreads.com/book/show/42772060-shrewd-samaritan

[1]: https://blog.brac.net/ultra-poor-graduation-the-strongest-case-so-far-for-why-financial-services-must-be-a-part-of-the-solution-to-extreme-poverty/


I very much agree with the sentiments of this article. I've been super frustrated by many critics of EA. The criticisms of EA generally seem to involve egregiously bad reasoning. The template for many of them seems to be:

1. Criticize a thing done by EA

2. Call EA names

3. Give an incredibly vague prescription that sounds nice but has no details, along the lines of: "So EA is right, we should reshape our giving. But not in a way that bolsters capital, or focuses on ridiculous Terminator scenarios, or eliminates the heart of giving in favor of a bureaucratic, technocratic, top-down, elitist approach to giving. Instead, we should invest in a community of care, with bottom-up programs, that opts for radical reforms that help make the world better." It feels like a campaign ad. People seem to be unaware that they can be part of EA while not giving to the specific parts of EA that they find objectionable. If one is not a longtermist, one can still give to combat malaria and factory farming.

When a movement has saved hundreds of thousands of lives and improved the conditions of vast numbers of animals on factory farms, criticizing some random, hyper-specific action is not sufficient as a criticism of the movement as a whole, so long as that action is not a necessary condition of being part of the movement.

Jun 7, 2022 · Liked by Richard Y Chappell

This line, "I think it’s now widely acknowledged that early EA was too narrowly focused on doing good with high certainty—as evidenced through RCTs or the like" made me think of the parallel issues with the "evidence-based medicine" movement. There's surely a lot of good that this movement has done, but there are also widely-accepted criticisms of it (pointing out that it often ignores evidence that doesn't come through RCTs, and that it focuses on statistical significance over effect size in things like the classification of carcinogens). And yet, I'm not aware of any particular competing movement.

Jun 7, 2022 · Liked by Richard Y Chappell

Agreed, this is a major frustration when reading criticisms.

Would you consider the progress studies movement to be an alternative? They seem genuine in their belief that enabling conditions for scientific progress will alleviate a lot of suffering, and are going about it in a much different way from EA branded organizations.


Pragmatic goals drift quickly from altruistic goals when altruistic values are employed in selecting means. The result is that, to get anything done, compromise and the acceptance of hypocrisy are commonplace in any group identifying with pragmatic values.

How does one achieve a pragmatic goal? By adjusting one's values until one can use the available means. Hypocrisy then seems to be a requirement of pragmatism, unless you stop claiming values that you ignore in your drive to achieve your goals.

The EA community has the problem of identifying itself with altruism. It will always be vulnerable to criticism so long as it pursues pragmatic goals.

For example, if jobs that pay well are anti-altruistic but provide money for altruistic causes through private donations, should an EA person take one of those jobs? If they do, should they walk around feeling like a hypocrite? These are, in our society, personal choices, precisely because our society allows jobs that are anti-altruistic. In that case, what role does EA play in our society? Is it enabling the system of harm that high-paying jobs enable? Etc., etc. Sure, there's some moral compromise somewhere along the line, but giving large chunks of your earnings to effective charities has obvious altruistic intention.

Still, if you want to annoy your critics, you could always stop using p**n, stop all drinking and drug use, become a vegan, use public transit, wear a sweater indoors on cold days, and opt not to have children. The criticisms will change, from self-serving heckling about your systemic corruption to self-serving heckling about your poor quality of life. The critics will adapt, but I don't see that any rigorous measurement or calculation of the altruistic impact of your new choices was ever made.

EA should offer methods to quantify such personal lifestyle choices as having children, using alcohol, or eating meat. I think that would change the narrative on its critics substantially.
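To make that concrete, here is a toy sketch (entirely my own, with invented placeholder numbers rather than anything from EA or GiveWell) of what quantifying lifestyle choices might look like:

    # Toy sketch: hypothetical annual net-impact scores for lifestyle choices.
    # Every number here is an invented placeholder, not a real estimate.
    LIFESTYLE_IMPACT = {
        "eat_meat": -1.0,       # placeholder: animal-welfare and climate costs
        "drink_alcohol": -0.1,  # placeholder: small health/productivity cost
        "have_child": 0.5,      # placeholder: even the sign is contested
    }

    def net_lifestyle_impact(choices):
        """Sum the (made-up) impact scores for a set of lifestyle choices."""
        return sum(LIFESTYLE_IMPACT.get(choice, 0.0) for choice in choices)

    # Example: a meat-eating drinker scores -1.1 on this toy scale.
    print(net_lifestyle_impact(["eat_meat", "drink_alcohol"]))

The hard part, of course, is not the arithmetic but defending the numbers; the sketch only shows the shape such a method would take.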


I agree most criticisms of EA are bad.

But I think the majority of humanity can plausibly claim to be doing better than effective altruists, since most humans are Christian or Muslim. Effective altruists and consequentialists acknowledge the problem of infinite utility but don't really have a way to deal with it. I think anyone who thinks they are following the most plausible path to infinite utility can legitimately claim to be doing a much better thing than EAs are, even if they think their religion is almost certainly wrong.

Most EAs seem to just ignore infinity or Pascal's Wager, or simply declare it out of bounds, but I don't think this is very principled.
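To spell out the structure of the wager (my framing, not anything stated above): once any outcome is assigned infinite utility, naive expected-value maximization is swamped by it, no matter how small its probability:

    E[U(\text{believe})] = p \cdot \infty + (1 - p) \cdot u_{\text{finite}} = \infty \quad \text{for any } p > 0

So any nonzero credence p in a promise of infinite utility dominates every finite-payoff intervention, which is why "just declaring it out of bounds" calls for a principled justification.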

" I’m dubious—it seems awfully fishy to just insist that one’s favoured form of not carefully aiming at the general good should somehow be expected to actually have the effect of best promoting the general good."

I laughed! I think this is a good point. Maybe in their defense: EAs who think or write a lot about politics seem to have ended up just giving money to some conventional causes (criminal justice reform, animal welfare, and so on). I could see an activist saying EAs spend a ton of time arriving at the obvious conclusion.

"I really think the great enemy here is not competing values or approaches so much as failing to act (sufficiently) on values at all."

Agreed.


I see moral complacency on the EA side.

“I’m going to spend my time doing bad things and then cover my moral being by donating cash or a weekend here and there.”

Rather than:

“I’m going to do good things”

PITHY ASIDE, PLEASE IGNORE ——-

It’s ironic that beneficiaries of the academic institution (the worst offender…yes, even worse than hedge fund managers!) would be critical, since academia offers the best opportunity to offset one’s moral failings without having to sacrifice much.

ASIDE COMPLETE ———

The replacement is to do work that does good. Doing good means affecting how risk is distributed in society. You’re either assuming your fair share of risk, or you aren’t.
