It seems so obvious that cluelessness wouldn't be a decisive objection. We can see this through the following.
(1) The correct moral theory will be true in all possible worlds.
(2) There are an infinite number of possible worlds in which agents are clueless about whether the correct morality proscribes most actions.
Therefore, since the correct theory is true even in those worlds, the fact that a theory generates moral cluelessness in some possible world doesn't mean it is false.
If this is true, we should also accept
(3) The fact that a theory generates moral cluelessness in the actual world doesn't mean it is false.
Even ignoring unintended consequences, as Lenman proposes, doesn't avoid cluelessness. We can imagine possible worlds in which there are very obvious consequences that are nonetheless hard to weigh up (e.g. each time we move, 200 drones bomb 8300^128 earthworms but save 59^128 cattle).
This was very insightful; it's been a while since I've dipped my toes into consequentialist theory. I'm particularly struck by your comments on how what fundamentally matters is epistemically prior to whether we can track it. Good brain food, thanks for sharing!
This post seems to be making two contradictory arguments. At one point, I thought you were arguing that what matters morally are only the clearly foreseeable consequences of our actions (since only those can enter our EV calculations, and our EV calculations tell us what matters morally). But then you dispute Lenman when he says that all that matters morally are the foreseeable consequences of our actions.
You write:
"if we’ve no idea what the long-term consequences will be, then these “invisible” considerations are (given our evidence) simply silent—speaking neither for nor against any particular option."
If this is actually how EV calculations work, by simply ignoring the unforeseeable consequences of our actions, then by fiat they avoid the problem of cluelessness—which is essentially that there are unforeseeable consequences of our actions, and those unforeseeable consequences have far more value (and thus morally matter more) than the clearly foreseeable consequences. But then the question is, why should a utilitarian think that EV calculations tell us what matters morally, if what utilitarians care about is the value of outcomes—since EV calculations only purport to tell us about the tiniest sliver of those?
You write:
"I don’t think I need to commit to any particular principle of indifference in order to say that I haven’t yet been presented with any compelling reason to revise my expected value estimate of +1 life saved."
But you seem to acknowledge such evidence earlier in this post, when you refer to "reasons to do with the extreme fragility of who ends up being conceived, such that even tiny changes may presumably ripple out and completely transform the future population." That is a compelling reason to acknowledge that the action of saving a life will not result in just +1 life saved and no other difference in value. Of course we have no idea what that difference in long-term value might be, but that's precisely the cluelessness problem.
You write, "To undermine an expected value verdict, you need to show that some alternative verdict is epistemically superior."
Why is that? I take it that the objection to acting on the basis of EV is that we cannot credibly calculate the value of our actions; *any* verdict we propose about the value of our actions is pure fantasy, including any possible proposed alternative. The opponent of EV utilitarianism would not attempt to propose some epistemically superior alternative because the point is that all attempts to come up with a calculation for the value of our actions are futile.
Your point that acting in accordance with EV calculations is "the best we can non-accidentally do!" might be your counterargument to this. I recommend reading Fred Feldman's "Actual Utility, the Objection from Impracticality, and the Move to Expected Utility." Feldman could be seen as responding to your point by saying, "But we can't even do that much because we can't make EV calculations about the value of our actions in the first place!" It's possible Feldman is rejecting your apparent assumption that EV calculations are only supposed to consider the clearly foreseeable outcomes of actions, but I think he raises problems even for EV calculations meant to track clearly foreseeable consequences. I believe he raises serious problems for your claim that, "I don’t think it makes sense to question our trust in expected value."
The cluelessness dialectic between you and non-utilitarians (or objective utilitarians, in Feldman's case) seems to be something like this. The non-utilitarians or objective utilitarians think that utilitarianism is about producing the best outcomes. You respond, no, it's about doing the actions recommended by EV calculations. The opponents say, but we can't make credible EV calculations. And you respond, yes we can, so long as we only include clearly foreseeable consequences in our EV calculations. I suppose the opponent would then say this: "if all that matters morally are the consequences of our actions, why do we think only the *clearly foreseeable* consequences of our actions matter morally? Surely pain, suffering, pleasure, life and death matter morally even when not foreseen!" And here is where the contradictory aspect of your post seems to me to arise: you appear to agree with this last objection, in the section titled "Ethics and What Matters." But if you agree with this, you seem to be conceding what I take to be the main point of your opponents.
It seems to me your opponents are making this point: "Why is utilitarianism concerned with the best we can non-accidentally do? Isn't it just supposed to care about the best simpliciter?" If utilitarianism is concerned with the mental states of moral agents, and in particular with whether outcomes are brought about intentionally or accidentally, then it loses some of the intuitive appeal that was grounded in the very simple idea that all that matters morally is the amount of good and bad in the universe. And indeed, you ultimately seem to concur with this in the "Ethics and What Matters" section, in which you seem to argue against the view that what matters morally is what we can foresee. But then I don't understand how EV calculations are meant to work. If we know that saving a child is not really just +1 life, because of all the unforeseeable consequences, and we know that unforeseeable consequences matter morally—yet our EV calculations are only able to tell us that saving a child is +1 life—why think that following EV calculations is acting in accordance with morality?
One option for utilitarians is to accept 1 and 2 (at the beginning of your post) and reject 3. The conclusion of your post is friendly to this option, even though you say you think 2 is almost certainly false. I think the tension I see in your post arises because you do not fully embrace 2, and yet in the latter sections of your post you make points that imply we should embrace 2.
If we accept 1 and 2 but not 3, we could have utilitarianism while accepting the implications of cluelessness that we do not know the best actions for us to take—but that would be okay because a moral theory is not supposed to tell us the best actions we can take. As you write, "what fundamentally matters is epistemically prior to the question of whether we can reliably track it." So why hedge? Why not just embrace that we cannot reliably track what matters morally?
Wouldn't cluelessness also undermine prudential and egoistic deliberation about what is good for me? I mean, if utilitarianism goes down, then so does general prudential deliberation about the long-term consequences of what is good for me, like saving money, etc.
Dude, why doesn't someone just get it over with and argue for love consequentialism -- do whatever will bring about the most love in the world. Because then every little act of love adds to that, and there's no chance of that being a bad thing long term. Damn Richard, you've been working on this shit for years with a family and haven't got love in your analysis of morality? Da fuck?
I agree that Lenman doesn't do a satisfying job of explaining why the same problem doesn't spread over to non-consequentialism. What one wants is a metaphysical way of demarcating those consequences that don't matter from those that do. If it's of interest, I've tried to do that here—although cluelessness only comes in at the end. Obviously you won't buy the moral distinctions the paper relies upon, but it aims to be the makings of a principled, non-consequentialist response that isn't available to the consequentialist. https://web.mit.edu/tjbb/www/SLL.pdf
Thanks for pointing to Lenman, his concerns seem similar to mine.
What seems missing from the discussion here are the alternatives being considered. I would roughly divide them into strict consequentialism, permissive consequentialism, and contextual consequentialism. Strict C is just do the math: you need an estimate for everything, and if it isn't in your equation it doesn't matter. I am not sure what permissive C would be, but something a bit more reasonable than strict C while still arguably consequentialism (assuming that is what the post advocates). What I am calling contextual consequentialism is the idea that when you are in a high-information context, you do the math (hospital budgets), and when you are not, you use heuristics based on historical experience (sort of an explore/exploit strategy). Deciding what sort of info environment you are in is a judgement call that depends on your info and your history.
Is contextual consequentialism really consequentialist? Doesn’t matter what the label is. It needn’t dismiss consequences, but it sees them as being limited by the context, with alternatives available in the form of heuristics.
Saving the drowning child is a high info environment. Generalizing that to saving persons elsewhere depends on there being an analogous information context.
I'm not sure what implications this would have for low-probability events in the far future. We don't have heuristics for those, and the info environment is poor. Maybe I side with Lenman here. We can wish we had sufficient info to make such decisions, or pretend we do, but using heuristics seems equally justified. One way or the other, we are making educated, well-reasoned (we hope) guesses.
Advantages of heuristics include better compatibility with a legal system and its concomitant cultural understanding of what people can expect from each other; and a higher degree of low-effort consensus.
Generating and executing a strategy based on far-future predictions and estimates is informationally and socially costly. The risk is that some low-probability event that was avoidable only before it became easy to foresee will exploit a bug in our heuristics. This advises a fail-soft strategy because, if you squint right, strict consequentialism is just a high-info-expenditure version of contextual consequentialism, and it too is vulnerable to catastrophic error. So the ultimate question becomes: how do we arrange for humanity to survive a serious catastrophe without actually preventing it?
> It’s surely conceivable that some agents (in some possible worlds) may be irreparably lost on practical matters. Any agents in the benighted epistemic circumstances (of not having the slightest reason to think that any given action of theirs will be positive or negative on net) are surely amongst the strongest possible candidates for being in this deplorable position. So if we conclude (or stipulate) that we are in those benighted epistemic circumstances, we should similarly conclude that we are the possible agents who are irreparably practically lost.
>To suggest that we instead revise our account of what morally matters, merely to protect our presumed (but unearned) status as not totally at sea, strikes me as a transparently illegitimate use of “reflective equilibrium” methodology—akin to wishfully inferring that causal determinism must be false on the basis of incompatibilism plus a belief in free will.
No idea what Lenman would say here, but I think this argument can 100% be made to work. It's a version of the 'moral realism is epistemically self-refuting' argument that specifically applies to a consequentialist theory of the good. Part of our bad epistemic circumstances, if they existed, would be that we would have no epistemic access to 'the good' at all: if we were so epistemically lost as to have no idea about any particular good action, then I don't see how we could at all be justified when reasoning about the good in general. We can then argue by cases as follows: *if* the structure of 'the good', at a metaphysical level, were consequentialist, *then* we couldn't know it; if it weren't consequentialist, we also couldn't know that it was consequentialist (because knowledge is veridical); ergo anyone who claims to know that the good has a consequentialist structure, at a metaphysical level, is wrong. I'm not sure if this argument is sound, because I'm not sure the first premise is true (as you hint, we might know at least some things about the long-term impacts of our actions); but it's definitely valid.
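To make the validity claim concrete, here is a minimal Lean sketch of the argument by cases (my own hypothetical formalization, not anything from the post or the comment; the labels C, K, p1, and p2 are stand-ins I have introduced). C abbreviates "the good has a consequentialist structure" and K abbreviates "someone knows that it does"; the premises are that C rules out K, and that not-C also rules out K (since knowledge is veridical), so not-K follows either way:

```lean
-- Hypothetical formalization of the argument-by-cases above (labels are mine):
--   C  : the good has a consequentialist structure (at a metaphysical level)
--   K  : someone knows that the good has a consequentialist structure
--   p1 : if the good is consequentialist, we couldn't know it (the disputed premise)
--   p2 : if the good is not consequentialist, we couldn't know that it is,
--        since knowledge is veridical
-- Conclusion: nobody knows that the good has a consequentialist structure.
example (C K : Prop) (p1 : C → ¬K) (p2 : ¬C → ¬K) : ¬K :=
  fun hK => (Classical.em C).elim (fun hC => p1 hC hK) (fun hnC => p2 hnC hK)
```

As the comment says, the validity here is trivial; everything interesting turns on whether the first premise (p1) is actually true.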