Hidden Challenges and Missed Engagements
Peering through the dialectical fog
Social media notoriously sorts us into bubbles and echo chambers. But the incentives to engage more with like-minded individuals are more general, and operate in academia too. It seems rather rare to find cutting-edge work on “big picture” debates like consequentialism vs. deontology, for example. Part of this may stem from hyper-specialization, as people work on the next epicycle of some highly specific sub-debate. But it also seems like we naturally cluster into different philosophical “camps” (utilitarians, Kantians, “common sense” moralists, etc.), hunkering down to work out the internal details of our preferred views. There are good reasons for that, of course—we often learn more from those who are philosophically “closer” to us and sympathetic to our general approach, whereas trying to understand and explain ourselves to those more distant can be a frustrating experience, rife with mutual misunderstanding (and negative referee reports). But there’s a risk that cross-camp debates end up unduly neglected, which seems especially unfortunate given that these are, after all, the bigger and more important questions.
Hence this post. After highlighting some challenges/arguments that I’d be especially keen to see my philosophical opponents address, I’ll invite anyone else to comment with challenges or objections to my views—anything you think I might be unduly neglecting or would benefit from grappling with more explicitly. But first…
A note on the limits of argumentation
Sometimes people assume that an argument they personally find unconvincing is thereby “question-begging” or otherwise worthless. This is a mistake. A determined opponent can always just reject a premise; that’s inevitable. Arguments can’t force people to change their minds, so that isn’t a realistic expectation.
We do better to think of arguments as highlighting neglected costs (of rejecting the conclusion), and inviting those who nonetheless reject our conclusions to (i) seriously consider which costs they’re willing to accept (i.e. which premises to reject), and (ii) suggest any counterarguments that mitigate the apparent cost of their preferred move (or perhaps even show it to be a “feature” rather than a “bug”). In a successful dialectic, everyone leaves with a clearer view of the costs and benefits of the competing views on offer.
A question-begging argument is one that offers no such illumination. The conclusion is so transparently contained within the premises that there is no conceivably “neglected” consideration there to highlight—nothing that might, for example, help to sway a “fence-sitter” who was as-yet-undecided about whether to accept the conclusion. Any such fence-sitter would necessarily be just as undecided about the question-begging premise.
In general, rather than just asking whether you could (or even do) reject this or that premise, it’s often more worthwhile to try to evaluate which claim (the premise or its negation) is overall more plausible. You could believe just about anything, after all. But it’s better to believe things that are more plausible rather than less so. Accordingly, a good objection to an argument involves explaining why it’s more plausible to reject a certain premise than to accept it. This is to make an essentially comparative claim.
Several of my recent posts have tried to explain my moral perspective in a way that I hope will seem plausible and compelling even to some who might not have been inclined to agree with me beforehand. For example, I suspect that at least some (esp. non-philosophers) may be drawn to deontology because they endorse the norms in practice, not because they necessarily believe them to be non-instrumentally justified. So simply highlighting this distinction could go some way towards convincing such individuals.
For another example, my claim that “Competing norms cannot plausibly claim to be more important, in principle, than people’s lives and well-being” is one that strikes me as having a lot of intrinsic credibility. Simply raising it to salience, and noting the conflict with deontological theory, might convince some fence-sitters to lean against the latter, since rejecting this claim seems a real cost of that theory.
I wouldn’t expect either of these points to sway a committed deontologist. (I don’t really expect anything to sway a committed deontologist, though you never know when someone might surprise you.) But I think it’s worth presenting such arguments nonetheless, because they could—especially in combination with further arguments—reasonably sway those who aren’t yet committed.
I’d like for there to be more such argumentative presentations—in both directions—since I don’t think we currently have a good collective grasp on what the balance of reasons in the consequentialism-deontology debate even looks like. (And presumably the same is true of many other important debates.) What would a deontologist say to try to convince a fence-sitter who was starting to feel swayed by my normativity objection and related arguments? Their view is sufficiently alien to me that I’m not able to predict this. But it would be good to know! Conversely, many non-consequentialists say things that lead me to think that they’ve internalized deep misconceptions about my view. (Indeed, I’d say about half of the most common objections to utilitarianism rest on outright misconceptions rather than legitimate differences of opinion. Though that still leaves plenty of room for the latter too, of course!)
Challenges I’d like to highlight
‘Three Arguments for Consequentialism’ summarizes some of my favourite arguments here, though—especially for the ‘Master Argument’—one would need to follow the supporting links to get the full picture.
As indicated above, I’m very curious what deontologists can say to try to make their fundamental principles sound more intrinsically credible, given their conflict with ex ante Pareto. What makes their interest-independent moralizing appreciably different from conservative moralizing, for example?
I’m also curious how they’d deal with my new paradox of deontology.
And, given how strongly most deontologists rely on intuitive “counterexamples”, I’d like to hear more about why the intuitive accommodations of deontic fictionalism or two-level consequentialism don’t defang these worries, at least to a significant extent. (I grant that some may just brutely intuit that there are additional non-instrumental reasons in these cases. That’s fine. But I’m wondering how confident they can be that this verdict is obvious or decisive, especially when more pure variants of their “counterexamples”, like Martian Harvest, don’t seem the slightest bit intuitively embarrassing to utilitarians—suggesting that there is no real “bullet” to bite here after all, and the strength of the original intuitions instead stems from confounding factors.)
For those who claim, more strongly, that there’s something objectionably “instrumental” or otherwise obviously wrong with the way that consequentialism values people, I’d especially urge engagement with my post, ‘Theses on Mattering’. I really think this is a case where the critics are flatly mistaken. But if I’m wrong about that, I’d love to hear the counterargument.
More generally, I’d love to see critics of utilitarianism engage with the strongest version of the view, rather than the straw-man caricature that’s in their heads.
For critics of Effective Altruism
Here I’m mostly curious whether these folks actually reject scope-sensitive beneficentrism, and think there’s something inherently objectionable about the core idea of “doing good better” (which remains bizarrely rare, after all!), or whether they secretly endorse this central plank of EA philosophy and are really just quibbling about strategy and edge cases (e.g. the expected value of anti-capitalist politics). Those latter disagreements are still important, of course, but they’re surely best thought of as internal questions about how best to pursue our shared goals, rather than as justifying the wholesale opposition and hostility that one actually tends to find from EA’s most vocal critics.
For proponents of narrow person-affecting views
Without committing the Epicurean fallacy, what advantage does the narrow view offer over a principled hybrid that combines impersonal and person-affecting reasons? As I explain the challenge there:
We all agree that you can harm someone by bringing them into a miserable existence, so there’s no basis for denying that you can benefit someone by bringing them into a happy existence. It would be crazy to claim that there is literally no reason to do the latter. And there is no theoretical advantage to making this crazy claim. (As I explain in ‘Puzzles for Everyone’, it doesn’t solve the repugnant conclusion, because we need a solution that works for the intra-personal case — and whatever does the trick there will automatically carry over to the interpersonal version too.) So the narrow person-affecting view really does strike me as entirely unmotivated.
The challenge then carries over to population-ethics objections to longtermism:
[T]his very natural hybrid view still entails the basic longtermist claim that we’ve very strong moral reasons to care about the distant future (and strongly prefer flourishing civilization over extinction). So the notion that longtermism depends on stark totalism is simply a mistake.
It remains sadly common for philosophers to flippantly reject longtermism on the (false) presumption that it depends upon totalism. I’d encourage them to think more carefully about this.
What are the biggest outstanding challenges to my views?
Comments welcome! I’ll try to reply to serious suggestions as time allows.