39 Comments
Feb 23, 2023 · Liked by Richard Y Chappell

Do you only endorse agreements behind the veil of ignorance that "markedly" improve people's prospects? Under veil of ignorance reasoning, shouldn't you also endorse killing one when there is, e.g., a 25% chance of saving five, since this would improve everyone's chances of survival ex ante (though not "markedly")?

Mar 24, 2023 · edited Mar 24, 2023 · Liked by Richard Y Chappell

I find veil of ignorance arguments against deontology problematic. First, when you say "From behind a veil of ignorance, everyone in the situation would rationally endorse killing one to save five," I'm going to assume that by "rational" you are just referring to what there is most self-interested reason to do. In that case, it's not clear why our moral obligations are determined by our self-interested reasons in this way. But more importantly, I think this kind of veil-of-ignorance-style reasoning implies a strict kind of utilitarianism, which you've objected to in the past.

For example, from behind the veil of ignorance, a self-interested agent would only want to maximize future prospects for well-being. He wouldn't care about whether the well-being was deserved or not. So from behind the veil of ignorance, self-interested parties would not select principles that give any _intrinsic_ weight to desert (of course, they might give some _instrumental_ weight to desert). But you've previously argued in favor of incorporating facts about desert in our moral reasoning https://www.philosophyetc.net/2021/03/three-dogmas-of-utilitarianism.html. E.g. you say that the interests of the non-innocent should be liable to be discounted. Why would purely self-interested parties care about desert from behind the veil of ignorance?

One answer might be that fully rational agents are also fully moral, and fully moral agents would care about desert because desert is morally relevant. In that case, it's not clear why a deontologist wouldn't also say that fully rational/moral agents would care about rights because rights are morally relevant.

For another example, I don't see why self-interested parties would distinguish between principles that kill persons vs principles that fail to create persons. From the perspective of the agent behind the veil of ignorance, failing to be created is just as much of a loss as being killed. Thus, I would imagine that the self-interested parties would be indifferent between the following two worlds:

* world A: N people live long enough to acquire X utility

* world B: N people live long enough to acquire X/2 utility before they are killed and replaced with another N people who live long enough to acquire X/2 utility.

You've argued elsewhere that the strongest objection to total utilitarianism is that it risks collapsing the distinction between killing and failing to create life. But why would self-interested parties from behind the veil of ignorance care about this distinction?

So while it is plausible that fully rational agents from behind the veil of ignorance would not care about rights, it is equally plausible that they would not care about desert, the distinction between killing vs failing to create, the distinction between person-directed vs undirected reasons, special obligations, etc. So it seems like veil of ignorance style reasoning leads to strict total utilitarianism.

Feb 24, 2023 · Liked by Richard Y Chappell

What's the rationale for P1 of the teleological argument? For the man in Bernard Williams' case, *that's my wife* is a reason to save her (rather than the other drowning person). How does that reason come from applying instrumental rationality to the correct moral goals?

Feb 23, 2023 · edited Feb 23, 2023 · Liked by Richard Y Chappell

Richard, have you considered writing a book arguing for consequentialism? Also, I'd recommend reading the suitcase paper in full--it provides one of the best criticisms of deontology I've ever read--much better than was provided in my article.


Great post! I've also been thinking about arguments for utilitarianism along the lines of your pre-commitment argument.


Well I'm convinced!

One worry I have about the status quo bias argument is that it doesn't explain our more specific intuitions--e.g. why we think you should disrupt the status quo and flip the switch but not push the person.

I also really like the other arguments on utilitarianism.net--especially the point that non-consequentialism has to hold that it's sometimes bad to put perfect people in charge of things.

I also think your preference paradox is very compelling, as well as this argument. https://benthams.substack.com/p/wrong-to-do-and-prevent-a-new-problem

Feb 23, 2023 · Liked by Richard Y Chappell

"From behind a veil of ignorance, everyone in the situation would rationally endorse killing one to save five, since that markedly increases their chances of survival." Why doesn't this just beg the question, since it assumes that all that matters morally is whether I survive (or whether the most survive)? Deontologists don't think, and have never thought, that morality is merely about the ends. They think, roughly, that how we get there matters too. So they'll just deny this. (In particular, they'll deny the "since....".) They'll also say, presumably, that it's rational to deny it, since it's rational to care about all the morally important stuff (which includes means as well as ends). And so on.

This has the feel of an argument that bolsters the utilitarian's confidence in their own view without having hope of convincing someone who didn't already agree in the first place.

Feb 23, 2023 · Liked by Richard Y Chappell

For argument 1, is there some reason to believe that pre-commitment from behind a veil of ignorance would always accord with consequentialist reasoning over deontological reasoning? Or is it just a property of these many-vs-few cases, where deontology is worried about violating the rights of the few? I mean, I agree with the argument in this specific case, but could a deontologist cook up versions of argument 1 that pull the other way?


As for the master argument, I'd imagine that the deontologist would dispute both 2 and 3. As for 2, she might say that she thinks consequentialism must deny various fundamental principles like

--people have rights

--people are separate in a magical and ineffable sense which somehow means that utilitarianism is wrong :)

--you shouldn't kill one person to save a few others.

--intent matters to the significance of actions.

I think that figuring out whether that's true will involve disputing the various principles to which they appeal. The important response is that these are mostly justified based on our intuitions about cases, which are not as reliable as principles, as both you and I have argued at various points.

I don't find 3 that convincing, to be honest. I think the deontologist would just say that while utilitarianism can explain away the linguistic intuition that organ harvesting is wrong, or the fact that ideal agents wouldn't do it in most circumstances because it is reckless, it can't explain the intuition that organ harvesting really is wrong--that it really shouldn't be done--that one has decisive reason not to do it. Most people have the intuition, I think, that even with perfectly ideal information, you shouldn't kill one person to save five.


To the first point, if people had a decisive reason to pre-commit their conditional consent to be killed without deontological constraints, then we would expect to see many utilitarian contracts (say, organ-harvesting contracts, especially given that health declines with age, albeit uncertainly). However, we have no such contracts. Rather, every known contract has deontological constraints, which makes it seem that the utilitarian reason isn't decisive.

I make the claim that utilitarianism’s reasonable rejectability among free agents, evidenced by the absence of pure utilitarian contracts, disqualifies it as an account of morality, given morality’s “acceptance” condition here:

https://neonomos.substack.com/p/what-isnt-morality
