When Metaethics Matters
And how it might affect our practical commitments
Whereas “first order” normative ethics addresses the question of what we ought to do, metaethics asks higher-order questions about the nature of normative ethics: Is it objective? If there are moral facts, what kind of facts are they, and how can we know about them? Stuff like that.
It’s often assumed that metaethics and normative ethics are independent: our answers to one don’t significantly constrain the other. But that may be an over-simplification. It seems to me that there are at least a few key questions in metaethics for which different answers could be expected to lead to very different practical commitments.
Rationalism vs Sentimentalism
There’s a deep divide between systematic theorizers (who inevitably end up committed to some pretty revisionary verdicts) and “anti-theorists” in the tradition of Bernard Williams, who seem to assume that morality cannot far outstrip our actual emotional responses, so that proposals which leave us “cold” are, for that very reason, morally suspect. We might dub the former group ‘rationalists’, and the latter ‘sentimentalists’.
For a test case, consider Jeff Sebo’s “rebugnant conclusion”: that a sufficiently vast number of insects could morally outweigh the interests of humanity. It’s an alarming prospect on any view, but if your immediate response is to think that it at least could be true, and is worth investigating further (perhaps in hopes of finding principles that would avoid this verdict), then you’re a bona fide rationalist, seeking the best abstract principles, and ready to follow where they lead. If your response is instead to sneer that this is transparently foolish nonsense, then you’re more likely a sentimentalist: firmly rooted in your gut reactions, and unwilling to radically revise them for the sake of anything so heartless as mere consistency.
I think it’s important that moral philosophers be rationalists (in the above sense). Sentimentalism (as understood here) is a kind of anti-philosophy, a refusal to reflect systematically on what matters. But we need such reflection, if we are to have any hope of uncovering new truths, or improving upon our untutored reactions. While it’s always possible for systematic thought to lead one further astray—an inconsistent Nazi is better than a consistent one!—careful philosophical inquiry remains our most reliable means of non-accidentally improving our epistemic position on moral matters. Or so I believe.
Rationalism may be motivated by reflecting on the historical fallibility of our gut reactions. If we think that past moral mistakes (from slavery to homophobia) should have been recognizable as such and overridden by suitably universal principles, we should expect the same to be true of some views that remain widespread today, as current views are surely not morally flawless.
Compared to sentimentalism, rationalism is more ambitious (aspiring to ideals of objectivity) and yet potentially more alienating (since it refuses to remain firmly rooted in our given emotional responses).
Metaethically, such rationalism is perhaps most naturally associated with robust moral realism, but is in principle compatible with a wide range of metaethical views including constructivism—so long as the latter doesn’t place any limit on the possible inferential distance between our current emotional inclinations and the verdicts we would reach at the end of inquiry, having rendered our ultimate values fully consistent.
It’s an interesting question whether some forms of anti-realism might instead lend themselves to greater complacency (or emotional indulgence) in first-order ethics. This seems plausible to me. Given that most people do not in fact value consistency especially highly, it’s a bit unclear how anti-realists could hope to mandate ideal consistency as a constraint on anyone’s “true” values. If all normative reasons must be what Williams called “internal reasons”—reasons that can get a “latch” on the agent’s actual psychology and motivations—then I worry that this metaethical view will swiftly lead to sentimentalist complacency, at least for most people.
(Some of the best people I know are constructivists who happen to value consistency extremely highly. But I think they’re unusual! Personally, if I gave up on moral realism, I suspect I would be much less inclined to take seriously interests that outstrip my empathetic limits—including those of non-cuddly creatures and perhaps far-future generations. And I’m pretty sure I still value consistency more than most people! Even so, I might well struggle to give it such priority in the absence of a belief that this was objectively warranted, independently of how I happen to feel about the matter.)
So, one practical reason to advocate for moral realism is that it may make others (those who are relevantly similar to me, at least) more receptive to radical moral reform, when needed.
Stance-Independence and Fundamental Fallibility
Even for those who prioritize systematic coherence and consistency, there may be important practical implications to denying robust stance-independence to morality. For there may be some views that share both of the following features:
(i) We judge that they have a modicum of intrinsic plausibility, such that they warrant non-trivial credence (conditional on moral realism being true); and yet
(ii) We’re extremely confident that the most coherent systematization of our own values would not lead to these views (and so they warrant near-zero credence, conditional on moral anti-realism).
In my own case, prioritarianism may fit the bill. It’s a plausible enough view (in some sense), but I can’t imagine ever believing it myself. So I only give it credence on the assumption that there can be a gap between the moral truth and my own (even idealized) stance on things. That is: prioritarianism could only be true if I’m fundamentally mistaken about what matters. Which (I think) I could be! But that depends on moral realism.
I wonder if strict impartiality might play this role for others. After all, the main argument against impartiality is just that we really want to be partial. Which makes sense as an argument if you’re an anti-realist, and morality is just about what you want (in some suitably idealized sense). But however much you prefer to be partial, you’ve surely got to allow that a robustly objective, stance-independent moral truth, if such existed, could very well be more impartial than you’d like! It seems hard to deny the intrinsic plausibility of the claim that everyone’s interests matter equally, after all.
Finally, there’s the question of whether our metaethics supports the claim that moral verdicts really matter, or have authority over us, such that we’d be making a deep mistake were we to simply disregard them. As I’ll expand upon in future posts, this is something that reductive naturalist accounts really struggle with, as they tend to make morality a matter of mere semantics. If you give a semantics for the word ‘ought’ which makes true all and only Kantian claims about what one ought to do, but the explanation for this is that the meaning of ‘ought’ is determined non-normatively, say by the most common usage in my linguistic community, then I no longer see any reason to care about these “ought” facts. For they’ve become just another way to talk about how others talk about morality, and I have no deep reason to care about such talk. Others’ talk has no authority over me.
The only way that moral verdicts could reasonably override my own preferences is if they have suitable authority, and it’s hard to see how this authority could be reducible to any purely naturalistic status like “common usage in my linguistic community”. Of course, it can also be hard to see how there could be irreducible normative authority, as normativity can seem an inherently mysterious notion. But there’s an important difference here. Whereas normativity in general can be hard to get a firm conceptual grip on, I find it tolerably clear that if there is any real normativity at all, it must be irreducible. And when I say that it’s “hard to see” how normative authority could be naturalistically reducible, this is really just a polite way of saying that I think it is clearly false to claim that it is (or could be) reduced to something non-normative.
I’m not sure whether naturalists really disagree on this point, or whether they’ve long since given up on what I mean by ‘normative authority’, and are content to simply offer a naturalistic semantics for superficial ‘ought’ talk, without concern for vindicating normativity in any deeper sense. In support of the latter hypothesis, my sense is that many metaethical naturalists are also Humean instrumentalists about normative reasons, and so deny that agents necessarily have any reason to do what they “morally ought” to do (if it goes against the agent’s overall desires). That is, they deny that their own moral talk has any normative authority behind it. But this then seems a mere terminological variant of Error Theory: the view that there are no moral truths, or other normative facts.
While there may not be strict logical entailments between metaethical views and particular practical commitments, I’ve suggested that there may be looser connections that remain significant.
(1) Moral realism (and perhaps some forms of constructivism) may better support systematic theorizing of a sort that upholds the possibility of radical moral reform, whereas common forms of anti-realism might more naturally tend towards complacent “sentimentalism”, or the reinforcement of pre-existing attitudes.
(2) Moral realism, with its associated belief in stance-independent moral truths, encourages us to take seriously uncomfortable yet intrinsically plausible principles like impartiality. It also leaves room for us to assign credence to views that we’re very confident our idealized selves would (given our starting points) never end up believing.
(3) Some metaethical views (e.g. reductive naturalism) struggle to capture normative authority, and so risk robbing moral claims of their normative interest. Such views may make it unintelligible why we would want to revise our values to make them more “accurate”.
These points combine to suggest that robust (non-naturalist) moral realism may be uniquely well-situated to accommodate practically significant moral uncertainty. Only on this view, I think, does it make sense for us to constrain our personal values in light of possible views that we’re confident we would never ourselves endorse.
This seems an important point. For the truth (on robust realism) may diverge significantly from your personal values—and what’s more, you can appreciate that there’s some sense in which the true values matter more than your personal values do. So it may make sense to “hedge your bets” by picking an option that scores slightly less well according to your own values, but vastly better according to other plausible (yet alien-to-you) values. But such hedging only makes sense if you give sufficient credence to robust realism being the correct metaethical view.
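To make the hedging idea concrete, here is a minimal numerical sketch (all figures are invented for illustration, not drawn from the essay): an “alien” view like prioritarianism gets non-trivial credence conditional on robust realism but near-zero credence conditional on anti-realism, and options are compared by credence-weighted expected choiceworthiness.

```python
# Illustrative sketch of moral-uncertainty "hedging". All numbers are
# made up; real intertheoretic value comparisons are far more contested.

def view_credence(p_realism, p_given_realism, p_given_antirealism):
    """Unconditional credence in a moral view, mixing over metaethics:
    P(view) = P(realism)*P(view|realism) + P(~realism)*P(view|~realism)."""
    return p_realism * p_given_realism + (1 - p_realism) * p_given_antirealism

def expected_choiceworthiness(scores, credences):
    """Credence-weighted average of an option's score under each view."""
    return sum(credences[view] * score for view, score in scores.items())

p_realism = 0.5  # hypothetical credence in robust moral realism

# An alien view gets credence only via the realist branch (cf. the essay's
# conditions (i) and (ii)); one's own values get credence either way.
credences = {
    "my_values":       view_credence(p_realism, 0.8, 1.0),  # = 0.9
    "prioritarianism": view_credence(p_realism, 0.2, 0.0),  # = 0.1
}

# Option A: best by my own values. Option B: slightly worse by my values,
# vastly better by prioritarian lights.
options = {
    "A": {"my_values": 10, "prioritarianism": 0},
    "B": {"my_values": 9,  "prioritarianism": 50},
}

for name, scores in options.items():
    print(name, expected_choiceworthiness(scores, credences))
# With these numbers, B (13.1) beats A (9.0). As p_realism falls toward 0,
# prioritarianism's credence vanishes and A wins: hedging pays only given
# sufficient credence in robust realism.
```

The design point is that the alien view’s second argument to `view_credence` is zero, encoding the claim that such views warrant near-zero credence conditional on anti-realism.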