Leaving details blank is not a theoretical virtue
I loved this post - this is a big pet peeve of mine as well and I think you nailed it.
However, I think that a lot of the time when I see similar arguments 'in the wild', even if they're initially framed narrowly as critiques of utilitarianism, they're in fact motivated by a broader feeling that there are limits to moral reasoning. Something like: we shouldn't expect our theories to have universal domain, and we don't get much leverage by extending them far beyond the intuitions that initially motivated them.
The main example I have in mind is Tyler Cowen's recent conversation with Will. Tyler raises a number of objections to utilitarianism. At times I found this frustrating, because viewed through the lens of figuring out the best moral theory, he is making isolated demands for rigor. But I think Tyler's point is instead something more like the above: that we shouldn't rely too much on our theories outside of everyday contexts.
You do touch on this in the post, but only briefly. I'd be interested to hear more about your thoughts on this issue.
This is a great article! You don't need to accept the repugnant conclusion to be an effective altruist. You just need to think helping people is important; the most obvious conclusion ever. Rejecting this trivial conclusion would be the really repugnant conclusion.
Caveat: I'm not a philosopher, but rather an economist.
I think many of these paradoxes (Quinn's Self-Torturer, Parfit's "mere addition," etc.) have the following form:
> Start from state S. Operation O(S) is locally preferable (i.e., it produces a preferred state S'). But if we iterate ad infinitum, we end up with a state S* that is not preferable to S.
The conclusion is usually either that S* actually _is_ preferable (i.e., our preferences are "rational" and therefore transitive), or that our preferences are seriously suspect, to the point where "maximizing" them is a hopelessly muddled concept.
I think there's another way to approach this. Behavioral economics deals with such problems ("time-inconsistent preferences") routinely. Consider a would-be smoker. He doesn't smoke his first cigarette, because he knows that his preferences display habit formation --- his first cigarette leads to the second, and so on.
In other words, the time 0 self has a genuinely different axiology than the time _t_ self. (Equivalently, preferences are state-dependent.) It would definitely be _cleaner_ if our rankings of future worlds were invariant to where we are today, but if the choice is between axiomatic hygiene and uncomfortable paradoxes, I'll take the mess.
(I think this also has something to say about, e.g., the demandingness objection. It's always locally preferable to save one more child, but the agent is justifiably wary of committing to a sequence of operations which turns him into a child-rescuing drone.)
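The "locally preferable, globally bad" pattern above can be made concrete with a toy model in the style of Quinn's Self-Torturer. Everything here is invented for illustration: the functional forms, the pain increments, and the payments are assumptions, not anything from Quinn or Parfit. The point is just that an agent whose per-step comparison ignores accumulation can rank every step as an improvement while the endpoint is worse than the start.

```python
# Toy sketch of the iterated-preference paradox (assumed numbers throughout).

def step_utility(pain_delta, payment):
    # Local comparison at each step: a tiny pain increment against a fixed
    # payment. The increment is too small to outweigh the payment.
    return payment - pain_delta

def total_utility(pain, money):
    # Global valuation: accumulated pain is weighted nonlinearly, so a large
    # total swamps the accumulated money.
    return money - pain ** 2

pain, money = 0.0, 0.0
for _ in range(1000):
    # Each operation O: +0.1 pain, +$1. Locally this always looks good...
    assert step_utility(0.1, 1.0) > 0
    pain += 0.1
    money += 1.0

print(total_utility(0.0, 0.0))    # utility of the start state S: 0.0
print(total_utility(pain, money)) # utility of S*: roughly -9000, worse than S
```

The mess the commenter endorses shows up here as the gap between `step_utility` (the time-_t_ self's ranking) and `total_utility` (the time-0 self's ranking); neither function is wrong, they just answer different questions.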
The best argument for "stop thinking" might be Joseph Henrich's one that for most of human existence trying to think for yourself rather than imitating tradition was one of the worst things you could do. Of course he had to do a lot of research & thinking to arrive at that abstract point!
I would say that I am against systematic theorizing of sorts, but I wouldn't say I've stopped thinking. My views are largely in line with Huemer, who doesn't have a clearly defined axiomatic system but clearly hasn't stopped thinking. (Unless I misunderstand what you mean). But I do accept the Repugnant Conclusion like Huemer. Actually, your article on population ethics on utilitarianism.net was influential in that regard. In fact, I mistakenly thought you took the total view because of that article. Whoops! (But great article)
I found some of Hoel's arguments weak, and I am saddened to see that he deleted your comment. It's also hyperbolic to analogize utilitarianism to a poison, even if you disagree with it. I recall seeing your comment, but now I can't find it. Very disappointing behavior.
Hoel's critique isn't the best. He doesn't allow for tradeoffs between certain values, which results in some absurdities: for example, trading hiccups against shark attacks. But clearly we always make such tradeoffs probabilistically. I responded:
"I also want to provide a possible critique of the shark example. Surely you would acknowledge that when people go swimming they risk being ripped to shreds by a shark. If you don't find it immoral for little girls to swim in the ocean, it means there is a probability of a little girl getting eaten by a shark that you find acceptable to trade off for playing in the ocean. Perhaps it's as small as 0.00001%. But what this says is that we can make these sorts of comparisons between something horrific and something trivial like hiccups. Unless you don't think little girls should be allowed to swim in the ocean, or you think swimming in the ocean is a higher good, or something."
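The implicit arithmetic in that reply can be spelled out as an expected-value calculation. The utility numbers below are pure assumptions for illustration (the comment only supplies the probability); the point is that permitting the swim at any nonzero attack probability fixes a finite exchange rate between a horrific outcome and a trivial good.

```python
# Hedged sketch of the shark tradeoff; both utility values are invented.
p_attack = 1e-7            # the comment's "perhaps as small as 0.00001%"
value_swim = 1.0           # assumed utility of a day playing in the ocean
value_attack = -100_000.0  # assumed (very large) disutility of a shark attack

expected_value = value_swim + p_attack * value_attack
# If letting the girl swim is permissible, expected_value must be >= 0,
# which already commits us to comparing the horrific with the trivial.
print(expected_value)  # about 0.99: the swim is worth the risk on these numbers
```

Shrinking `value_attack` toward minus infinity is the only way to forbid the swim, which is exactly the "no tradeoffs" position the commenter is pressing against.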
I'm basically sympathetic to your arguments here, but I really don't grasp intuitively why rejecting
(ii) utopia is better than a barren rock
is so repugnant.
Or, maybe more to the point: the reason to prefer Utopia to a barren rock is the preferences of people who actually exist... but in the absence of anyone to hold such a preference, indifference between the two feels much less nihilistic.
Preferences also seem to me to solve the intrapersonal version of the neutrality paradox: we prefer future moments of value to future non-existence because of our preferences, not because of some overriding reason that the former is better than the latter.
I think I can imagine someone who genuinely feels like they have gotten all they want out of life, and is indifferent between continuing to live or dying, and while I might find that alien or unfamiliar, I don't find it _wrong_.
Am I missing something here?
I don't recall the comment you're referencing, but likely the reason your comment was deleted was because it was against the moderation policy of The Intrinsic Perspective, which disallows hostility, yelling at people IN ALL CAPS, name-calling, or just general cantankerousness.
Well said. Like, Parfit literally discusses the repugnant conclusion and the non-identity problem as two sides of the same coin, and it's not as if deontology doesn't run head first into the latter through the usual slave-creation problems.
For what it's worth, I don't believe that total utilitarianism commits you to biting the bullet on the repugnant conclusion (as opposed to the non-identity problem). You can be a total utilitarian (as I am), but just have a restricted scope on what you think is valuable/important and hence in need of maximization.
I believe it really does depend on your substantive meta-ethical views. If you are in some sense constructivist about value (i.e., you believe things are good because we value them, and not vice versa), then there is no non-question-begging argument for creating people (consider: people should create people because life is valuable; but their lives are valuable only if these people value their lives; but they would only value their lives if they existed in the first place; and they would only exist if we should create them, and we're back at the beginning).
I think this post overlooks the way that utilitarian systems only work if they can solve all problems, while many other systems of ethics continue to work even when their axiom sets are incomplete. Utilitarianism claims that a simple set of axioms can answer all moral problems. Before you can use utilitarianism to answer any question, you need to choose a set of axioms. These then necessarily apply to all problems - otherwise, you would need some rule saying where they stop applying, and I don't recall seeing any serious utilitarians proposing one. Without an explicit stopping rule, accepting utilitarianism means accepting whatever its axioms imply in extreme cases, including the repugnant ones.
Most philosophical systems I know have some kind of repugnant consequence - that's why philosophy isn't solved. But the repugnant conclusions arising from those systems are different. Kant's categorical imperative says you should tell a murderer where their next victim is, and you can't accept the categorical imperative unless you're okay with that. But you can accept the categorical imperative without having clear opinions about extreme world-states. Utilitarianism has a different trade-off: you can't accept any specific set of utilitarian axioms unless you're okay with the corresponding claims about extreme world-states.
Silence isn’t necessarily a virtue, but sometimes it’s better than being wrong.
What are your thoughts on moral particularism? Is it so obviously misguided as to not even require explicit rejection?
This is a great post! Somehow Utilitarianism has become an easy target for difficult problems, likely as you say because it is sufficiently rigorous to surface them.
I'm curious as to whether anyone has done work around moral uncertainty and randomness for some of these cases. For example, with the Repugnant Conclusion, what does it get us to recognise that we are going to be uncertain about the actual day-to-day experience of future people? And that it will, in fact, vary from hour to hour in any case - as ours does every day? So by pushing a vast number of people on average close to the "barely worth living" line, at any particular time many of them will actually be under that line, due to the stochastic nature of human experience.
Does it buy us anything to say that this world is, at any particular time, clearly worse for (say) the current bottom 10% than an alternative world with fewer, happier, people, and that this bottom 10% might in practice represent a very considerable number? How might we account for this in our reasoning?
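The stochastic point above is easy to check with a rough Monte Carlo sketch. The distribution and all its parameters are invented for illustration (nothing in the comment specifies them): assume hour-to-hour wellbeing is normally distributed with a mean just above the "barely worth living" line, and count how many sampled person-moments fall below it.

```python
# Assumed model: wellbeing per person-moment ~ Normal(0.1, 1), line at 0.
import random

random.seed(0)
LINE = 0.0            # "life barely worth living" threshold (assumed)
mean_wellbeing = 0.1  # population average sits just above the line (assumed)
noise_sd = 1.0        # hour-to-hour variation (assumed)
N = 100_000           # sampled person-moments

below = sum(1 for _ in range(N)
            if random.gauss(mean_wellbeing, noise_sd) < LINE)
print(below / N)  # roughly 0.46: nearly half of moments fall below the line
```

On these made-up numbers, the commenter's "bottom 10%" is an understatement: with the mean that close to the line, almost half of all person-moments are below it at any instant, so how the theory weighs below-line moments matters a great deal.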