81 Comments
Apr 23 · Liked by Richard Y Chappell

I believe you are most wrong about moral realism. I don't know exactly where you stand on every auxiliary issue, but:

(a) I don't think there are any good arguments for moral realism, and I think much of its appeal stems from misleading implications about the supposed practical consequences of antirealism. Many of these framings of antirealism are rhetorical in nature and rooted in biases and misunderstandings, much as critics of utilitarianism uniquely frame it as some kind of monstrous and absurd position.

(b) I don't think most nonphilosophers are moral realists or antirealists, and more generally I don't think moral realism enjoys any sort of presumption in its favor, e.g., I don't grant that it's "more intuitive" or that it does a better job of capturing how people are typically disposed to speak or think than antirealism (though I also don't think people are antirealists).

You ask what the most helpful or persuasive point I could make to start you down the right track might be. I don't know for sure, since I am not super familiar with many of your background beliefs, but I'd start with this: I think there is little good reason to think that most nonphilosophers speak, think, or act like moral realists. Rather, I think moral realism is a position largely confined to academics and people influenced by academics. Whether people are moral realists or not is an empirical question, and the empirical data simply doesn't support the notion that moral realism is a "commonsense" view. I don't know where you stand on this issue, but I think it's an important place to start.

I came to this conclusion after many years of specifically focusing on the psychology of metaethics and in particular the question of whether nonphilosophers are moral realists or not. Early studies suggested that most people would give realist responses to some questions about metaethics and antirealist responses to other questions. However, I came to question the methods used in these studies and launched a large project (which culminated in my dissertation) to evaluate how participants interpreted these questions. I came to the conclusion that most people were not interpreting them as researchers intended (and frequently didn’t interpret them as questions about metaethics at all). I suspect the best explanation for this is that ordinary people don’t have explicit stances or implicit commitments to metaethical theories, and that metaethics has very little to do with ordinary moral thought and language. The case for this is largely derived from my own data and my critiques and analyses of research on this topic. It’d be very difficult to summarize it but I could elaborate on specific points.

The more general takeaway is that I don't think moral realism enjoys any special kind of intuitive priority, and I suspect that the reason some people are disposed towards moral realism has more to do with path-dependent idiosyncrasies in their particular cultural backgrounds and educations than with anything built into ordinary moral thought.

Apr 23 · edited Apr 23 · Liked by Richard Y Chappell

I'll preface this by saying I'm a new reader, so if you have written on this topic elsewhere, I apologize. Can you explain to me how utilitarians think about utility and preferences?

It is intuitive to me that an individual can have a complete and transitive preference ordering across the infinite possible states of the world. But from Arrow's impossibility theorem, we know that you cannot aggregate individual ordinal preferences into a social preference ranking without violating at least one seemingly reasonable condition (such as non-dictatorship or independence of irrelevant alternatives). So in order to determine the "greatest good for the greatest number of people," I think you have to accept that cardinal utility exists.

Unlike rankings (ordinal utility), I find the notion of cardinal utility, in the sense of preferring world-state X over world-state Y by some definite amount (say, z percent more), much less intuitive. I suppose you might be able to deduce your own cardinal utility over world-states by ranking gambles across states, but I don't know how this can be applied across people.
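Roughly, the gamble-ranking idea I have in mind is the standard von Neumann–Morgenstern construction, sketched here with arbitrary anchor outcomes (the 0/1 anchoring is illustrative, not forced by the theory):

```latex
% Anchor two reference world-states (the 0/1 choice is arbitrary):
u(W_{\mathrm{best}}) = 1, \qquad u(W_{\mathrm{worst}}) = 0.
% For any other state Y, find the p at which I'm indifferent between Y for
% sure and the gamble (W_best with prob. p, W_worst with prob. 1-p); then
u(Y) = p\,u(W_{\mathrm{best}}) + (1-p)\,u(W_{\mathrm{worst}}) = p.
% The resulting scale is unique only up to positive affine transformation,
u'(x) = a\,u(x) + b, \quad a > 0,
% which is exactly why it doesn't, by itself, license comparisons across people.
```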

For a concrete concern, what determines the pleasure and pain scale? Suppose a painless death, or the state of non-existence (absent considerations of an afterlife), is assigned zero. Then a slightly painful death might be assigned negative one, which is infinitely(?) worse than a painless death. A slightly more painful death is negative two, which is twice(?) as bad as a slightly painful death. I suppose a state of infinite pain could be assigned zero, but that is problematic because a state of infinite pain doesn't exist, in the sense that there can always be a worse state of pain.
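To put the zero-point worry in symbols (numbers made up purely for illustration): ratio claims like "twice as bad" aren't preserved once you're allowed to slide the zero around.

```latex
% Interval scales preserve differences, not ratios. With the made-up values
u(\text{painless}) = 0,\quad u(\text{slightly painful}) = -1,\quad u(\text{more painful}) = -2,
% the "twice as bad" ratio is (-2)/(-1) = 2. Now shift the zero (u' = u + 10):
u'(\text{painless}) = 10,\quad u'(\text{slightly painful}) = 9,\quad u'(\text{more painful}) = 8.
% Differences survive the shift; the 2:1 ratio does not. So "z percent worse"
% talk seems to need a privileged zero, which is just what I'm unsure about.
```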

This is less an objection and more an expression of my own curiosity over how utilitarians think about this stuff.

Apr 23 · Liked by Richard Y Chappell

Hm, this is difficult as I probably disagree with you on almost everything. I guess I'll start with moral realism. I always found the evidence (intuitions, progress, convergence) pretty weak and easy to explain socio-evolutionarily, with anti-realism then winning out on parsimony grounds. What would you say is the best case for moral realism?

Apr 23 · Liked by Richard Y Chappell

Thanks for this post. I am pretty much in agreement with all you wrote. I am not an academic, so I don't know if my thoughts will be as well honed as what you are looking for here. I hope they're still worthwhile to you.

The main question I had when reading your post was about your statement that reasoned inquiry is the best way to get to the truth. In absolute terms, I agree, but there is always someone with more mental capacity (i.e., more intelligent) and more moral capacity (i.e., more sensitive to suffering) than me, who has worked through their own emotional biases to do better moral reasoning than I can. So I think that to a large extent the moral questions of life come down to: whom do I trust for guidance? I think that addressing this question is worthwhile. Do you have a set of criteria?

Currently, I judge someone holistically by how closely their thinking and behavior approximate behavior that I have already come to accept as moral. For example: Do they have faith-based beliefs? That would be a con. Do they parent well? That would be a pro. Are they vegan? That would be a pro.

I'm curious if you could address heuristics that you use for this process.


A few things that I disagree with you about, though I think you know about most of these things:

1) I'm confused as to why you don't take theism at all seriously. Yes, the problem of evil is a big problem, but you compare that to the evidence for theism--fine-tuning, the existence of psychophysical laws, the anthropic stuff, the fact that there are laws at all--and it's hard to be super confident that these together aren't enough to outweigh the problem of evil. I was making a spreadsheet with rough numbers for the Bayesian force of various considerations, and even when I assumed the problem of evil favored atheism at 100,000 to 1, theism won out overall. (A toy version of the arithmetic is sketched below, after this list.)

2) I think the case for hedonism about well-being is very compelling based on lopsided lives and that your objection plainly fails https://benthams.substack.com/p/lopsided-lives-a-deep-dive (I know that's a big topic so no need to reinvestigate it, but you did ask for disagreements).

3) Hmm...other than that you're basically correct about everything (except your villainous and dastardly tentative support for halfing in Sleeping Beauty).
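Here's the toy version of the Bayesian arithmetic from point 1. The factors below are invented for illustration only, not my actual spreadsheet numbers; the point is just that Bayes factors multiply, so several strong considerations can swamp one very strong one.

```latex
\frac{P(T \mid E)}{P(\neg T \mid E)}
  = \frac{P(T)}{P(\neg T)}
    \times \underbrace{10^{4}}_{\text{fine-tuning}}
    \times \underbrace{10^{3}}_{\text{psychophysical laws}}
    \times \underbrace{10^{2}}_{\text{anthropics, laws at all}}
    \times \underbrace{10^{-5}}_{\text{problem of evil}}
  = \frac{P(T)}{P(\neg T)} \times 10^{4}.
% Even granting evil a 100,000-to-1 force for atheism, the product can still
% favor theism if the other factors are collectively large enough.
```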

Apr 23 · Liked by Richard Y Chappell

Nice article. You're generally very reasonable and I appreciate your tone/open-mindedness.

I think that some forms of utilitarianism have major counterintuitive implications, like the double-or-nothing gamble and repugnant-conclusion-type results.

Now, granted--I don't have a good answer to these. But these concern me because some of the infinite ethics, big gamble, population ethics type questions point toward fanaticism and a very different ethical orientation. For example:

1. If maxipok is the true path, it undermines a bunch of other ethical ideas. What if the average person increases x-risk? What if economic wealth and prosperity increase x-risk?

2. If meat-eaters produce way more harm than good, it undermines a bunch of ethical perspectives about saving lives. Maybe even existing as a vegan causes a bunch of harm to animals. Do we want more people living happy lives if they hurt animals so much that it's a net negative? What does that say about the moral value of humanity as a whole?

Maybe it's best to avoid seriously considering such questions because you look like a crazy fanatic, but it's plausible to me that what we should be doing ethically could be way way different from what EA is doing.

Again, I don't have a really good answer, but if you said "how could Richard Chappell be maximally wrong?" it would be about one or two assumptions that flip our ethical world upside down. I don't know which ones. I don't know how to deal with these questions.

I've taken to comfort in focusing more narrowly on questions like "If a couple is going to pick an embryo, is it ethical to pick the one you expect to live the best life?" cause the big questions are so hard.


I am afraid I may not have the time to engage in a sustained back-and-forth on this point, but it seems to be the kind of thing you're inviting, so let me make a stab: while I more or less accept the three principles you've listed above, I would say that I do generally reject beneficentrism as you've defined it. I may misunderstand your definition, but I reject utilitarianism as an ethical theory.

I suppose the beneficentrism part hinges on what we mean by "general" welfare. If you simply mean *net* welfare - so that your project should be to make sure that you do more good than harm to whatever number of individuals you affect - it doesn't bother me so much. But I do have a problem with a view that sees us as having to promote "general" welfare in any broader sense of maximization: that everyone's life projects should include promoting the welfare of their polity or their world as a whole, in ways that involve benefitting as many people as possible (or providing the largest total benefit).

Most people in history have not held such a maximizing view, and it's not clear to me why they should. Instead we accept a relatively strong partialist account, in which one is obligated to promote the welfare of those one is directly engaged with - co-workers, family, friends, fellow organization members, maybe neighbours - but going beyond that is supererogatory. (Beyond that circle there are *harms* that one is obligated not to cause, but harm and benefit are not symmetrical.)

I think the case for this view (or contrariwise for utilitarianism) goes down to deep foundations, possibly including internalism vs. externalism on moral motivation. But an old blog post of mine lays out a starting position:

https://loveofallwisdom.com/blog/2015/01/of-drowning-children-near-and-far-ii/

Apr 26 · Liked by Richard Y Chappell

Hi! This isn't about something you're necessarily wrong about: I'm not sure and you may very well be right and I'm wrong. But I think you missed an important consideration in your New Paradox of Deontology (https://rychappell.substack.com/p/a-new-paradox-of-deontology) that can make rejecting 4 reasonable.

To start, I'd like to reformulate your thought experiment somewhat to bring it more in line with scenarios that motivated premise 1 in the first place:

New Organ Harvesting: a doctor has 5 patients, who are themselves victims of attempted murder, in need of organs. Coincidentally, they also have a healthy patient whose organs can save the five victims. The doctor decides to murder the healthy patient to save the other five. How strongly should you hope that their attempt to save the other five succeeds?

(Note: this isn't the only interpretation of your thought experiment; perhaps the protagonist can prevent the murder attempts in the first place. But let's stick with this one for now.)

I like this formulation because it actually illustrates what using people as means entails: I find it difficult to reason about situations where "using people as means" is just stipulated, because it does not actually bring out the relevant intuitions.

Now, I argue, given the illegitimate method through which the doctor saves lives, it can be reasonable to actually discount those benefits of the act as "tainted" by the murder of another person. Consider:

Saved by Organ Harvesting: you are a victim of a murder attempt, successfully saved by the doctor in the scenario above using an organ from the murdered patient. How thankful should you be for this?

Versus:

Saved by Organ Donation: you are a victim of a murder attempt. You are saved by the organ of another person, who signed up for organ donation and died in a car crash. How thankful should you be for this?

It seems to me that it's totally reasonable to not feel very thankful in the first scenario, and indeed to feel that the attempt to save you is "tainted" by using the healthy patient's organ. In contrast, I think you should feel very thankful in the second scenario, albeit sad at another person's untimely death.

This doesn't mean we should discount the benefits of saving people to zero. But perhaps some degree of discounting, such that the difference between Successful and Failed Prevention is about as bad as a generic killing, is justified.
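To give a rough sense of how that calibration might go (with a hypothetical multiplicative discount factor d and a stipulated equal value V per life saved, both my own simplifying assumptions):

```latex
% Let V = value of one life saved, and d \in (0,1] the discount applied to
% benefits secured through the murder. The gap between the two outcomes is
\text{Successful Prevention} - \text{Failed Prevention} = 5\,d\,V.
% Choosing d \approx 1/5 makes that gap \approx V, i.e. roughly the stakes of
% one generic killing, which is the calibration I was gesturing at above.
```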

Now, I'm actually not sure it's possible to coherently accommodate the discounting intuition. Presumably, the discount should be applied multiplicatively to the benefits of the action achieved through evil means. Presumably, it is applied only to the expected benefits of the action, not all future consequences: it seems like your future joy matters just as much even if you are saved by organ harvesting. Maybe just those assumptions are unsustainable.

What do you think?

Apr 23 · Liked by Richard Y Chappell

Not a professional philosopher, and not something that I think you're wrong about, but something I often think about in your posts disputing deontology that I'd be interested to hear more about:

In your ethical theory vs. practice post, and the linked utilitarianism.net article on rights-based objections, you discuss the case of the doctor who murders a patient to save 5. The argument as I understand it goes: in practice, the doctor is highly unlikely to be correct about the consequences of their actions; in particular, taking into account the likely reaction of others, the murder is probably net negative.

But this has always struck me as rather pat, since there's no reason we have to take others' reaction to the doctor as fixed. After all, you could make similar objections to other (imo obviously correct) utilitarian arguments, for example "eliminating slavery might be net negative, not least when you consider the reaction of current slaveholders".

I think the obvious thing for a 19th-century utilitarian faced with that argument to think would be: we should promote the utilitarian worldview until enough people agree with it that they no longer react negatively in a way that would negate the benefits of ending slavery.

So, I think you're still faced with the question: are people _wrong_ to react negatively to the doctor? Would it be better to convince people to accept murdering doctors? Even if the consequences of a doctor murdering a patient for their organs are very negative _now_, should utilitarians be working toward a world where they aren't? Where the doctor is recognized as a humanitarian?

And if not, why not? Simply practical reasons, such as the difficulty of actually convincing people?

More broadly, utilitarianism feels correct to me when I think on the margin, but when I think about enacting broad value change, there are a bunch of different options that utilitarianism might recommend, some of which seem fine, some bizarre, and some horrifying. Do you think deontological ideas, or ideas from other ethical frameworks, have a role to play in deciding between different Schelling points far away from our current equilibrium?


God and Hedonistic Act Utilitarianism. What Matthew Adelstein just said. Also, Simon Rosenqvist is fantastic - https://philpapers.org/rec/ROSHAU

I believe that God exists and I believe that Hedonistic Act Utilitarianism [or Classical (mostly Benthamite) Utilitarianism] is true.

Apr 23 · Liked by Richard Y Chappell

I haven't yet gone through _all_ of your writing, but one obvious point is "why sentient, not sapient?" I.e., why should we weigh, e.g., animals, fetuses, infants, or cognitively devastated dementia patients the same way as we weigh sapient, capable-of-thinking beings?


"Here’s a recent example."

The example asserts that a specific comment of yours is a "dishonest way to describe" something and that that specific comment instantiates "the ideological fanatic mindset". That's not saying that you, the person, are generally dishonest or generally behave like an ideological fanatic.

Your subsequent reply in that thread expresses a norm against proclaiming to know better than another person what that person thinks, yet you immediately break that norm by claiming to know (mistakenly) what I think. I never said or thought that what you wrote in that thread was "the full extent of your thoughts", that I had a claim to your time, or that I knew what you'd "considered at length". My comments were about the arguments written in that comment thread. The thread was prompted by you claiming (mistakenly) to know that those holding a different view "feel nothing" - another example of you breaking the norm you later expressed.

Since then I've read other texts by you here and on the pro-utilitarian website. My objections to your view were not covered or answered. I will move my comments to a place where discussion of those kinds of objections to your view can proceed openly, something all available evidence indicates can't happen here.
