
I think this insight takes the force out of every objection to consequentialism. Very few people think "it would be great if the surgeon's hand slipped and they killed the patient and distributed their organs, but it would be wrong to do that knowingly." Most objections to consequentialism seem hard to stomach if you imagine that it would be good if the wrong act happened.

Aug 29, 2023·edited Aug 30, 2023

Lexicalism doesn't necessarily violate transitivity. I think it does have to violate at least one of weak independence, transitivity, the independence of irrelevant alternatives (IIA), completeness *or* certain continuity assumptions. See https://www.researchgate.net/publication/303858540_Value_superiority

Weak independence: If an object e is at least as good as e’, then replacing e’ by e in any whole results in a whole that is at least as good.

Here are some specific transitive and complete lexical views and some of their other properties:

1. Leximin and lexical threshold negative utilitarianism satisfy weak independence and IIA, but violate even weak continuity assumptions (see the sketch after this list).

2. Rank-discounted utilitarianism (e.g. Definition 1 in https://econtheory.org/ojs/index.php/te/article/viewFile/20140629/11607/345) satisfies IIA and some weak continuity assumptions, but violates weak independence.

3. Limited, partial and weak aggregation views can (I'd guess) be made to satisfy weak continuity assumptions (and transitivity), but violate IIA. I'm not sure if they can also be made to satisfy weak independence.
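
To illustrate item 1, here's a minimal Python sketch of leximin (the standard definition: compare outcomes by their sorted utility profiles, worst-off first; the example numbers are just for illustration). It shows the continuity violation: an arbitrarily tiny gain to the worst-off beats arbitrarily large gains to everyone else.

```python
# Minimal leximin comparison: sort each utility profile and compare
# lexicographically, prioritizing the worst-off position.
def leximin_better(a, b):
    """True if profile a is strictly leximin-better than profile b
    (assumes equal population sizes)."""
    for x, y in zip(sorted(a), sorted(b)):
        if x != y:
            return x > y
    return False  # identical sorted profiles

# A tiny gain to the worst-off outweighs huge gains to the better-off,
# however large those gains get: continuity fails.
print(leximin_better([0.001, 5, 5], [0.0, 100, 100]))  # True
```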

Your sequence argument doesn't work against rank-discounted utilitarianism in particular, because that view violates weak independence and/or because your argument isn't specific enough about the steps in value. For example, if we interpret the aggregated utilities as individual harms or bads and benefits or goods, then for any harm of degree x, there is some slightly lesser harm of degree y and a finite number N such that N harms of degree y are worse than one harm of degree x.

What rank-discounted utilitarianism violates instead is a uniform continuity assumption: that there is a finite difference in degree of harm d>0 such that, no matter how great a harm x<0, an individual harm of degree x-d<x<0 can be outweighed by a finite number of lesser harms of degree x. But to even state this assumption, you need to assume you can measure differences in harms; it's also much less intuitively obvious because of how abstract it is, and it's not clear why the same d>0 needs to work for every x. (You could generalize it with metric spaces or uniform topological spaces to avoid differences and even distances between harms, but that's even more abstract.)
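
To make that concrete, here's a toy numerical sketch in Python, assuming a rank-discounted rule in the spirit of the linked Definition 1 (utilities sorted from worst to best, the r-th ranked utility weighted by BETA**r); the specific BETA = 0.99 and the example numbers are just for illustration:

```python
# Toy rank-discounted utilitarianism (my simplification of the linked
# definition): sort utilities from worst to best and weight the r-th
# ranked utility by BETA**r, with 0 < BETA < 1.
BETA = 0.99

def rdu(utilities):
    ranked = sorted(utilities)  # worst off first, so they get the largest weight
    return sum(BETA ** r * u for r, u in enumerate(ranked, start=1))

# For any single harm, enough slightly lesser harms eventually outweigh it:
print(rdu([-0.9] * 500) < rdu([-1.0]))  # True

# But no uniform gap d works for all x: with d = 1 and the mild harm
# x = -0.005, no number of x-harms is ever worse than one (x - d)-harm,
# since the rdu of x-harms is bounded below by x * BETA / (1 - BETA) = -0.495.
x, d = -0.005, 1.0
print(any(rdu([x] * n) < rdu([x - d]) for n in (10, 1000, 100000)))  # False
```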

That being said, all of the above views can be interpreted as aggregative in some sense (or possibly in different senses) anyway. Leximin and LTNU can be represented by separable utility functions, even if not real-valued ones, while RDU can be represented with a real-valued utility function. Limited, partial and weak aggregation are aggregative by name. A truly non-aggregative view could be Scanlon's Greater Burden Principle / Regan's harm principle, according to which the greatest individual harm/potential loss is prioritized lexically above all others. That view violates IIA, so it can't be represented by a utility function, although it could still be separable if we order lexicographically based on harm.

See also https://centerforreducingsuffering.org/research/lexical-views-without-abrupt-breaks/


'One version of this argument runs roughly as follows: for any value entity claimed to be lexically worse than some given bad, we can construct a sequence of intermediate bads in which adjacent elements do not differ greatly in badness. Therefore, the argument goes, each bad in the sequence can plausibly be outweighed by a sufficiently large amount of the slightly milder bad.'

'To be specific: say that 3 e-objects are worse than any amount of e′-objects, but a single e-object is not (i.e. we have weak inferiority that kicks in at m = 3 e-objects). We can now construct a sequence in which there is no strong inferiority kicking in at any single step:

1 e′-object, 1 e-object, 2 e-objects, 3 e-objects'

I think I must be too mathematically challenged to follow this. I don't get how the sequence in the second quote is a counterexample to the claim in the first one.

Aug 31, 2023·edited Aug 31, 2023

Assume that more e-objects than otherwise is worse, so N e-objects are worse than M e-objects if and only if N > M. Then:

1. 1 e′-object vs 1 e-object: some finite number of e′-objects is worse than 1 e-object, by assumption (only weak inferiority, not strong inferiority)

2. 1 e-object vs 2 e-objects: 3 x 1 e-object = 3 e-objects, which is worse than 2 e-objects

3. 2 e-objects vs 3 e-objects: 2 x 2 e-objects = 4 e-objects, which is worse than 3 e-objects

So, at each step, only finitely many of the lesser (possibly composite) bad are required to outweigh the greater one.
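
To make the steps numerically checkable, here's a toy badness model in Python (my own construction for illustration, not from the post): let the badness of M e′-objects together with N e-objects be f(M) + N, with f increasing but bounded strictly below 3, so that lexicality kicks in at exactly 3 e-objects:

```python
# Toy badness model: f(M) increases with M but never reaches 3,
# so 3 e-objects are worse than ANY number of e'-objects, while
# 1 or 2 e-objects can still be finitely outweighed.
def badness(M, N):
    f = 3 * (1 - 2 ** -M)
    return f + N

print(badness(1, 0) > badness(0, 1))  # step 1: one e'-object outweighs 1 e-object here
print(badness(0, 3) > badness(0, 2))  # step 2: 3 x 1 e-object is worse than 2 e-objects
print(badness(0, 4) > badness(0, 3))  # step 3: 2 x 2 e-objects is worse than 3 e-objects
# ...and yet no number of e'-objects outweighs 3 e-objects:
print(all(badness(M, 0) < badness(0, 3) for M in (10, 100, 1000)))  # True
```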

author

I find these formalisms very obscure. Is there an *intuitive* example of transitivity-respecting lexical values? (I don't really see much interest in the bare formal/logical possibility, if nothing that actually matters plausibly corresponds to the imagined formal structure.)

Aug 31, 2023·edited Sep 1, 2023

Do you mean weakly continuous, transitive and lexical, or just transitive and lexical?

Heaven would be a standard example of something compatible with transitive strong lexicality. Going to Heaven is infinitely better than any finite number of Earthly goods. I guess you could claim that that's just because Heaven is eternal, and so an infinite aggregate of finite goods, but you could also claim that any minute of Heaven is infinitely better than any Earthly good.

Magnus gives possible examples of lexicality without strong lexicality between intermediates in a sequence: https://centerforreducingsuffering.org/research/lexical-views-without-abrupt-breaks/

One way to formalize this in a utility function would be to aggregate the "small" values with vanishing marginal contributions, summing to at most some finite limit: e.g. sum them additively and then apply arctan, the logistic function, or some other increasing function with bounded range (e.g. https://en.wikipedia.org/wiki/Sigmoid_function), and then add that to the untransformed sum of the lexical values.

Pick a bounded increasing function f. Then, with M non-lexical e′-objects and N weakly lexically dominating e-objects, define your utility function by:

u(M, N) = f(M) + N

If sup_M f(M) - f(0) > 1, then e weakly lexically dominates e′ without strongly lexically dominating it. Taking N > sup_M f(M) - f(0) gives a number of e-objects that dominates any number of e′-objects; below that, enough e′-objects can outweigh the e-objects.
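
For instance, a minimal runnable sketch taking f = arctan (one of the bounded increasing functions suggested above), so that sup_M f(M) - f(0) = pi/2 ≈ 1.571 > 1:

```python
import math

# u(M, N) = f(M) + N with f = arctan: bounded, increasing, f(0) = 0.
def u(M, N):
    return math.atan(M) + N

# A single e-object does not strongly dominate: 2 e'-objects already beat it.
print(u(2, 0) > u(0, 1))  # True: atan(2) ~ 1.107 > 1

# But N = 2 > pi/2 e-objects beat any number of e'-objects.
print(all(u(M, 0) < u(0, 2) for M in (10, 10**6, 10**9)))  # True
```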


Presumably a 0.00000000000000000000000000000000000000000001% chance of 3 e-objects ought to be *less* bad than 2 e-objects if the gap in value really is "small" (it's not a counterexample to the first quote if it's not, right?). So since [some amount of] e′ is worse than 2 e-objects, and 2 e-objects are worse than a 0.00000000000000000000000000000000000000000001% chance of 3 e-objects, [some amount of] e′ is worse than a very low chance of 3 e-objects. I feel like that last result is going to play badly with "a certainty of 3 e-objects is worse than *any* amount of e′", even if I'm not sophisticated enough to work out why off the top of my head. Or is that wrong? Like, suppose I *gradually* increase the chance of 3 e-objects from 0.00000000000000000000000000000000000000000001%, whilst gradually pumping up the number of e′-objects. It seems like this should never *suddenly* switch to "this chance of 3 e-objects is worse than any number of e′-objects", for any chance of 3 e-objects below 100%. But then, on reflection, why would some number of e′-objects be able to outweigh a 99.99999999999999999999999999999% chance of 3 e-objects, but not a 100% chance of it?

Am I just confused here? Is it just okay to reject the very first step, that a very low chance of 3 e-objects is less bad than a certainty of 2 e-objects?


I think what's meant by the gap in value between two (possibly composite) objects being "small" is exactly that neither is lexically inferior/superior to the other, i.e. some finite number of the one with the lesser magnitude can outweigh the one with the greater magnitude.

I don't think you should think about it in terms of uncertainty. Lexicality doesn't necessarily imply fanaticism (e.g. https://forum.effectivealtruism.org/posts/GK7Qq4kww5D8ndckR/michaelstjules-s-shortform?commentId=4Bvbtkq83CPWZPNLB). Instead, it's just about adding more objects in uncertainty-free comparisons.


I don't understand the forum comment*, so I can't really respond to that, except to say that surely any good theory ought to yield sensible results when dealing with uncertainty, so "I don't think you should think about it in terms of uncertainty" doesn't make much sense. And since in practice we will always be somewhat uncertain about our actions, "in cases where you are certain of what the outcome will be, treat any amount of A as better than any amount of B" will have zero direct normative implications.

*I mean that in the literal sense of "it is over my head because I'm too dumb/lazy to figure it out and then also figure out its relevance in the current context", not the philosopher's "I don't understand" that means "I think this is confused".


I don't mean that it doesn't need to yield sensible results when dealing with uncertainty; I just mean that that's independent of the issue at hand here, and thinking in terms of uncertainty can mislead.

Aug 29, 2023·edited Aug 29, 2023

I don't think Parfit's argument necessarily demonstrates that our evaluations must be aggregative. Instead, it could be that we should consider whole policies over all future decisions instead of making decisions locally. In cases where we have to (or expect to) make large numbers of independent decisions, as in the example, what we should do will indeed agree with aggregation, but if we only have to (or expect to) make a few such decisions, we don't need to.

As I think you're aware, similar arguments can be made against unbounded utility functions (unbounded expected values), although they require unboundedly many (but still finitely many) decisions to deliver an almost surely worse outcome, and the scenarios are not very realistic. We can get infinitely many decisions through acausal influence in a multiverse, or in case things go on forever over time, but unbounded stakes across the separate decisions seem hard to defend without getting into extremely speculative and far-from-standard possibilities. See https://alexanderpruss.blogspot.com/2022/10/expected-utility-maximization.html?m=1 for an abstract example that could apply across separate correlated agents.

author

Your first paragraph neglects my paragraph that begins: "Note that the (expected) value of each choice is clearly independent of the others..." That's where I argue that "the fact that repeating the choice of concentrated benefits across the whole population results in an overall worse outcome (than the alternative choice of greater distributed benefits) establishes that *each* such choice is worse."

Of course, it remains open to the contractualist to simply *ignore* the evaluative facts, and base their deontic verdicts on global policies in the way you describe. I certainly can't stop someone who brutely insists upon this. My point is just that the rest of us have no reason to follow them: "Given that our anti-aggregative intuitions seem to apply just as strongly to evaluative matters as to deontic ones, and yet are demonstrably mistaken about the former, there’s a real challenge for anti-aggregationists to show why their deontic intuitions should be trusted."


I think there's a hidden assumption here that someone could dispute: that our evaluations should be transitive and independent of irrelevant alternatives (IIA). That's controversial. Some (interpretations of) person-affecting views reject transitivity or IIA, and many people who find the repugnant conclusion counterintuitive have intuitions that don't satisfy IIA. (Parfit himself seemed sympathetic to person-affecting principles, at least in his later years.) In Parfit's example as you use it here, in worlds where all of the choices will be available, you could order the options so that the diffuse benefits beat the concentrated benefits each time, matching the global policy. In other worlds, the order may be different.

If you insist that value is transitive and satisfies IIA, then an anti-aggregationist with the apparently mistaken evaluative intuitions could just respond that their intuitions about Parfit's example are not transitive or don't satisfy IIA, and so are not evaluative intuitions at all, or not evaluative intuitions of *that kind* (maybe they track a different kind of value that isn't transitive or doesn't satisfy IIA). Then their evaluative intuitions wouldn't be demonstrably mistaken at all.

"Given that our anti-aggregative intuitions seem to apply just as strongly to evaluative matters as to deontic ones"

Is this true, though? Anti-aggregationists often have reasons based on the wrongness of acts, e.g. imposing large burdens or causing significant harm, that apply more strongly to the deontic than the evaluative, or possibly not at all to the evaluative.
