22 Comments

I think this distinction sounds like it's missing the point.

The ethics question that is implicit is "how should people reason when it comes to moral questions?" So if you say that you are a utilitarian but you don't reason in a utilitarian way, then you seem to have changed the target of the conversation.

author

I strongly disagree, for two reasons.

Firstly, it confuses moral theories and decision procedures. Ethical theories simply aren't, in the first instance, theories of "how people should reason". They're usually characterized as theories about *what makes an action right or wrong*; my (somewhat heterodox) view is that they would do better to be framed around the question of *what is fundamentally most important, or worth caring about*. But either way, these fundamental questions of ethical theory are very different from the practical question you raise. If you come to ethical theory thinking that the rival views are answering that practical question, you will come away badly confused.

As a theory, utilitarianism has *implications* for which decision procedure you should use. It says you should adopt whatever decision procedure is such that *your adopting it (i.e. reasoning in that way) would have the best consequences*. Which particular decision procedure actually meets this description is an empirical question, not a philosophical one. (Note that any theory X need not recommend reasoning in an X-ish way. For example, if the fate of the world depended upon your adopting an irrational decision procedure, any sane moral theory will agree that you ought to do precisely that -- make yourself irrational, if you can, in order to save the world.)

Secondly, my whole point is that *you don't know* what it is to "reason in a utilitarian way". To answer that, you would need to combine utilitarian goals with a theory of *instrumental rationality*. Many people ASSUME naive instrumentalism, without even realizing it. They have a picture in mind that actually involves the combination of utilitarianism + naive instrumentalism, but *mis-label* this picture as "reasoning in a utilitarian way". The whole point of my post was to explain why this is a mistake.

True "reasoning in a utilitarian way" involves combining utilitarian goals with whatever is the *correct* theory of instrumental rationality. I've offered some indications of what I think this involves. The precise details are open to question. But I'm pretty sure that naive instrumentalism is not the right answer, and so the caricatured understanding of "utilitarian reasoning" is actually an outright misconception.

May 1 · Liked by Richard Y Chappell

This is a great article. It seems people constantly use naive utilitarianism as an argument against utilitarianism, which just seems wild to me. Even granting that a theory's being self-effacing is bad (which seems to be true), utilitarianism arguably isn't even self-effacing some of the time. If utilitarianism is defined as "bring about the outcome with the most net positive utility" as opposed to "take the action that will result in the most net positive utility," it's not even telling you to use a different theory when the practical implication would be to not always think about what's actually going to maximize utility.

Apr 30 · edited Apr 30 · Liked by Richard Y Chappell

"This latter point is broadly under-theorized. Decision theory provides a kind of ideal theory of instrumental rationality, applicable to cognitively unlimited and unbiased angels, perhaps."

There has been some work on this kind of thing, though I haven't explored it yet:

https://easley.economics.cornell.edu/docs/Constructive.Decision.Theory.pdf

https://mkcamara.github.io/ctc.pdf

Apr 30 · edited Apr 30

Would the following scenarios affect what is naive and what is a wisely principled procedure?

1) Suppose we place the same moral weight on ants and people (in some sense).

2) What if society becomes majority utilitarian? [Of course, utilitarians will still be selfish, but lying won't necessarily be frowned upon if it was for the greater good.]

* Note there are 2 million ants for every person.

author
Apr 30 · edited Apr 30 · Author

Can you expand upon what you have in mind? Like, a concrete example of something that's currently naive but might, in the specified scenarios, instead qualify as wisely principled?

(My default assumption is that the answer is 'no'. Social norms and expectations might change what even qualifies as (deceitful) *lying*, as opposed to mere "white lie"-style expected untruths. But I think rational utilitarians should generally value truthfulness *at least* as highly as most people do -- maybe more.)

Apr 30 · edited Apr 30 · Liked by Richard Y Chappell

Focus on the ants scenario.

Suppose you're a smart high schooler from a third world country with very limited opportunities. Should you cheat slightly on the SAT (if you could do so without getting caught) to win a scholarship to a good university, so that you can put yourself in a position of more career power to better help ants?

Or suppose you're a middle schooler from a village. Should you cheat to get into a high school in the city? If you don't, you'll probably be stuck with little education, because the school in your village is bad.

The kids you're displacing don't care about ants.

author

Seems broadly similar to a normal case involving ordinary altruism rather than ants? I guess the thought is meant to be that the stakes are much higher, in part because your values are so much rarer / more neglected. But then the higher stakes make it all the more important that you not get caught cheating (in real life you can't *stipulate* future results, e.g. that you won't get caught, or that "cheating slightly" is necessary or would make any difference to whether you get a good scholarship).

Just as in the ordinary case, there might be special circumstances in which you'd do well to cheat, but it doesn't seem generally likely to be good career advice for smart high-schoolers (incl. from third world countries with very limited opportunities). And again, the higher stakes just make it all the more important that they follow actually-good career advice.

Apr 30 · edited Apr 30 · Liked by Richard Y Chappell

"the stakes are much higher, in part because your values are so much rarer / more neglected." Yes.

"And again, the higher stakes just make it all the more important that they follow actually-good career advice."

Not sure about that. The reward function may have a threshold shape, or at least be nonlinear. If the only way to effect change for ants is to become super successful at something, then a non-stellar level of career success (which would result from not cheating) is perhaps of little use. If you get caught, you're not much worse off than if you hadn't tried (relative to the stakes), but if you succeed, the payoff could be huge. Anyway, that's the sort of asymmetry a good scenario would be cooked up to illustrate.

author

Yeah, that's possible. I think it's unusual to be in such an asymmetric (high upside / low downside from cheating) situation, and it's important to pay attention to possible downsides (just look at all the broader reputational harm caused by SBF's fraud). But I'm not claiming that norm-breaking could *never* be rationally justified. Just that I think we should start from a very strong pro-social/co-operative disposition, and not be *easily* swayed into norm-breaking defection based just on very speculative/unreliable reasons for thinking oneself to be the rare exception to the rule.

May 1 · edited May 1

I tend to agree. But my point is that how often you ought to be swayed may depend on whether you adopt radical hedonistic utilitarian values, such as extreme concern for animals or the exchangeability of current people with "merely possible" people.


If you say being a "principled" defender of X only commits you to having some principle Y which causes you to defend X, then how could someone even be a non-principled defender of X? All defenders of X either defend X because of the principle X, or because of a different principle or set of principles Y. Clearly, if you defend X because of the principle X, you are said to defend X "in principle," while if you defend X because of an alternative principle Y, you are said to defend X "not in principle."

An objection to this would be that someone might defend X because of Y where Y is self-interest or flipping a coin. On that view, a "principled" defender of X is merely committed to having some non-arbitrary moral principle, X or Y, on which they defend X. But I think my description above better matches common usage.

For example, I think it is totally fair to say utilitarians have "no principled objection" to slavery. I suppose they do have *a* principled objection, but the principle on which they object is not slavery itself, which is obviously what the statement, "you have no principled objection to slavery," is actually supposed to convey.

Maybe the semantics are boring or irrelevant, but the implications are not. When you say "there’s little reason for non-theorists to care about this further, purely theoretical matter," I think this is incorrect. Using the given example of free speech, most people on earth do not, in fact, think that free speech norms are "more conducive to moral progress and overall well-being than any realistic alternative." So whether you support free speech on utilitarian or deontological grounds will alter your support of free speech, should you become convinced of the majority-held view.

In general, I think saying it doesn't matter whether you support X on instrumental or non-instrumental grounds is incorrect when the instrumental value of X is controversial.

author

It's very common for people to make unprincipled appeals to free speech, i.e. just when it is advantageous to "their side". That they are "unprincipled" about it is revealed by the fact that they won't defend the free speech of people they disagree with.

author

On the further implications: it's true that the grounds of one's support for X will influence *what you'd have to change your mind about* in order to cease supporting X. But I don't know that there's any particular reason to expect instrumentally-based support to be noticeably less robust (or more likely to be subsequently abandoned) than non-instrumentally-based support.

Compare: most people on earth do not, in fact, think that there are non-instrumental reasons to support free speech. So whether you support free speech on instrumental or non-instrumental grounds will alter your support of free speech, should you become convinced of the majority-held view *about the lack of an adequate non-instrumental basis*.

I guess if you thought it was normatively overdetermined -- that there were *both* instrumental and non-instrumental reasons -- that would be the *most* robust position, least susceptible to being overturned by later changes of mind. But I still don't think there's *much* reason for most people to care about any of this, because I expect *either* basis is sufficiently robust in practice. It seems very rare (as far as I can tell) for people to change their minds about these sorts of things.


Judging by recent Rasmussen poll results, naive instrumentalism is far more prevalent among the politically-engaged elite than in the adult US population at large.

Asked whether they would prefer for the political candidates they favor to win election by cheating rather than lose, 35% of "the elite one percent" -- i.e., people with postgraduate degrees and annual incomes of more than $150,000 living in densely-populated urban areas -- answered "yes," versus only 7% of other poll respondents. And among a subset of the elite one percent who said that they talk about politics every day (a question that only 8% of "non-elite" respondents answered in the affirmative), 69% said that they'd prefer for the candidates they favor to win through cheating rather than lose(!)

https://www.rmgresearch.com/wp-content/uploads/2024/01/Elite-One-Percent.pdf

https://twitter.com/RobertBluey/status/1770789411568910756

author

Interesting. Though if someone is politically disengaged or apathetic, any naive instrumentalist dispositions they have wouldn't show up on specifically political questions. You'd need to ask about norm-violating means of achieving something else that they cared about more.

Apr 30 · edited Apr 30 · Liked by Richard Y Chappell

A fair point. In light of which I'll modify my take on those poll results: they simply indicate that naive instrumentalism is rife among the politically-engaged elite in the US. Which is dismaying but not surprising.

May 1 · Liked by Richard Y Chappell

I would simply prefer to doubt the survey methodology. I love slides 10 and 22. Re slide 25, it would be interesting to see what proportion of those responding to that particular question would agree that Trump was not re-elected because of cheating by his opponents.


All I know about the specific survey question at issue here -- aside from the fact that it was conducted by Scott Rasmussen, the founder and former president of Rasmussen Reports, a Ballotpedia editor-at-large, and FWIW a co-founder of ESPN -- is that responses were received from 1000 participants. What reason do you have to mistrust the result?


Hi William.

I don't want to get into a long discussion on Richard Chappell's website about this, as I see it as peripheral. I am quite familiar with surveys and statistical methods, but had not heard of Rasmussen previously. A quick search did find a couple of recent articles that further cemented my doubts:

https://www.washingtonpost.com/politics/2024/03/08/rasmussen-538-polling/

which claims

"A few weeks later, Rasmussen again published dubious poll results on behalf of a right-wing organization. This time, the findings alleged to have uncovered rampant fraud in 2020, including that 1 in 12 Americans had been offered “pay” or a “reward” for their vote. Trump and his allies celebrated the poll; again, the results do not comport with the reality of there being no demonstrable wide-scale vote-buying scheme at the state or national level."

This suggests to me that there are some problems in either the wording of questions or the sampling Rasmussen uses.

Nate Silver, in his blog comments,

https://www.natesilver.net/p/polling-averages-shouldnt-be-political

suggests only that "Rasmussen has indeed had strongly Republican-leaning results relative to the consensus for many years. Despite that strong Republican house effect, however, they’ve had roughly average accuracy overall because polls have considerably understated Republican performance in several recent elections...", and argues that ABC not drop Rasmussen's polls from the 538 site's poll averaging. But he did criticize Rasmussen Reports ten years ago for their automated telephone sampling methods.

More generally, there is a modest literature (mainly from business schools!) on the ethical behaviour of atheists, which reminded me of this presentation by Rasmussen.


Unlike you, I claim no expertise re public polling technique or statistical methods, but I've had some prior familiarity with Philip Bump's punditry, from which I've gathered that he's a partisan hack. His critique of Rasmussen's contentions about the prevalence of various sorts of hanky-panky by mail voters in 2020 struck me as essentially question-begging, and through hasty Googling I found a cogent rebuttal in this blogpost: https://ethicsalarms.com/2023/12/13/confirmation-bias-test-the-rasmussen-2020-voter-fraud-survey/
