Naïve Instrumentalism vs Principled Proceduralism
Not your standard consequentialism-deontology distinction
Naïve Instrumentalists are practically unconstrained in pursuit of their moral or political goals. If it seems to them, just based on the immediately legible evidence, that violence or deception would advance their goals, they won’t hesitate to act accordingly.
Principled Proceduralists, by contrast, allow their instrumental pursuits to be practically constrained by rules, principles, or procedures that promote co-operation and limit downside risk (incl. of escalating conflict) in a way that can be appreciated somewhat independently of their particular beliefs or commitments.
Now, if all you know about a person is that they’re either a naïve instrumentalist or a principled proceduralist, which option would you expect to be better for the world? Which do you take to be recommended by consequentialism? There’s a funny tradition of objecting to consequentialism by offering different answers to these two questions, which seems pretty incoherent.
Many people seem to associate consequentialism with naïve instrumentalism. I’ve always found this ironic, because consequentialist philosophers more than anyone else have written at length about why naïve instrumentalism is bad and irrational. (In short: it neglects higher-order evidence. Given familiar human biases, we have strong higher-order evidence that our first-order judgments on certain topics are less reliable than just sticking to “tried and true” moral rules. That is, following the rules actually has higher expected value, all things considered.)
Moreover, as Scott Alexander points out in ‘Less Utilitarian Than Thou’, non-utilitarians often seem far more open in practice to common forms of naïve instrumentalism, i.e. doing bad things for a putative “greater good” (typically, advancing their political ideology). Arch-liberal J.S. Mill was no aberration: a principled concern for the impartial good fits very naturally with liberal proceduralist commitments.
So I think the standard narrative here is quite badly confused. It may help to separately step through how we should think about (i) being principled; and (ii) what consequentialism really claims. There seem to be confusions in common ways of thinking about both.
Two Conceptions of Principle
What does it take to be a principled defender of, say, free speech? Distinguish two very different answers:
(1) To robustly support practical norms of free speech—that is, without pausing to assess, in any given case, whether you personally approve of what is being said.
(2) To hold the deontological theoretical belief that free speech norms have a non-instrumental justification.
These are importantly different, because you could robustly support free speech and inquiry on the (Millian) instrumental grounds that these norms seem more conducive to moral progress and overall well-being than any realistic alternative.1
I’d say the first answer—having a robust practical commitment to free speech—gets at what is practically important. We can always ask further, secondary questions about the basis of one’s principled commitment: whether it’s ultimately instrumental or non-instrumental, for example. But there’s little reason for non-theorists to care about this further, purely theoretical matter. Illustrating this, it would be very strange to deny that J.S. Mill was a principled defender of free speech, as the second (excessively theoretical) conception does.
I always think of this when people complain that utilitarians have “no principled objection” to slavery. Do they not think that slavery is robustly detrimental to human well-being? Do they not think that there’s anything principled about robustly opposing practices that are so harmful? Perhaps one can imagine an absurd scenario involving “happy slaves” to which the usual utilitarian objection would no longer apply.2 But it’s awfully misleading to infer from this that we have “no principled objection” to real-world slavery. You might as well claim that commonsense morality, in allowing a hypothetical surgery technique involving nanobot bullets, has no principled objection to shooting people. There is a true thought somewhere in this vicinity,3 but unless it is very carefully explained, that probably isn’t the thought that will actually get communicated to the typical reader. It certainly shouldn’t be our default way of talking about “principled” objections and commitments in applied ethics.
Two Conceptions of Consequentialism
When non-consequentialists think about consequentialism, they focus on its putative account of right action (“an act is right iff it maximizes (expected) value”). Many then implicitly assume naïve instrumentalism and so infer that a rational consequentialist agent would go about blindly following their first-pass expected-value calculations.
This is really daft. But to fully grasp the error here, it helps to get clearer on some fundamentals of ethical meta-theory (i.e., theorizing about ethical theory).
As I explain in ‘Ethical Theory and Practice’, ethical theories are in the business of telling us what fundamentally matters. (The consequentialist answer is various good things—presumably including well-being—and that’s all: no special moralizing or treating life or agency as sacred.)
To get practical advice, the account of what matters (i.e. the morally correct goals or concerns) needs to be combined with an account of instrumental rationality (i.e. how an agent should seek to achieve the correct goals).
This latter point is broadly under-theorized. Decision theory provides a kind of ideal theory of instrumental rationality, applicable to cognitively unlimited and unbiased angels, perhaps. But I trust that nobody really thinks it is instrumentally rational for humans to go around constantly calculating expected utilities. (That is to say: we all recognize that naïve instrumentalism is irrational.) Humans are non-ideal agents, and accordingly require a non-ideal theory of instrumental rationality—a theory that’s fit for human-sized minds. I develop a rough picture of what I think this would look like in section 5 of my 2019 paper, ‘Fittingness Objections to Consequentialism’ (drawing especially on Pettit & Brennan’s brilliant 1986 ‘Restrictive Consequentialism’, along with general insights from the heuristics and biases literature). A simplified version is offered in the practical ethics chapter of utilitarianism.net.
But the main thing I want to emphasize for now is where naïve instrumentalism would enter the picture. It isn’t part of the core consequentialist moral theory, specifying what matters. Rather, naïve instrumentalism is a false theory of instrumental rationality that critics ignorantly associate with consequentialism.
Remember this the next time you see someone reference “naïve utilitarianism”. Remember, especially, that the “naïveté” is entirely orthogonal to the “utilitarianism”. Naïve instrumentalism is a false theory of instrumental rationality. Utilitarianism specifies that the moral goal is to maximize well-being. It’s possible to combine these two entirely separate views, and the result will be bad. But there’s no particular philosophical impetus to combine them. It’s not an especially “natural” combination of views, except in the brute psychological sense that many misguided individuals happen to believe that they go together.
If more people read this post, hopefully that brute error can be further reduced.
1. Indeed, this seems like the best justification for them. Naïve radicals are surely right that it would be objectionable to prioritize merely procedural justice over substantive justice. The only truly reasonable basis for proceduralism is faith that this is the most effective means to securing substantive justice in the long run.
2. Though a welfare objectivist might still think that a lack of autonomy makes one worse off. Conversely, does anyone really think that there is no conceivable situation in which the usual moral valence could be flipped?
3. For many bad things, there is a deeper explanation of why they are bad. Whenever something’s badness admits of such a deeper explanation, you might say that it is not itself among the “fundamental” bads, and so it should be possible to imagine a weird case in which a thing of this kind lacks all its usual bad-making features, and so is not bad at all. Low decouplers confuse this theoretical claim with the practical claim that you’re not robustly opposed to actual things of this kind, or that you don’t really regard them as (even derivatively) bad at all.
I think this distinction sounds like it’s missing the point.
The implicit ethics question here is “how should people reason when it comes to moral questions?” So if you say that you are a utilitarian but you don’t reason in a utilitarian way, then you seem to have changed the target of the conversation.
This is a great article. It seems people constantly use naïve utilitarianism as an argument against utilitarianism, which just seems wild to me. Even granting the argument that being self-effacing is bad (which seems to be true), utilitarianism arguably isn’t even self-effacing some of the time. If utilitarianism is defined as “bring about the outcome with the most net positive utility,” as opposed to “take the action that will result in the most net positive utility,” it’s not even telling you to use a different theory when the practical implication is to not always think about what’s actually going to maximize utility.