
I think this insight takes the force out of every objection to consequentialism. Very few people think “it would be great if the surgeon's hand slipped, killing the patient so that their organs could be distributed, but it would be wrong to do that knowingly.” Most objections to consequentialism are hard to stomach once you imagine that it would be good if the supposedly wrong act happened.

Aug 29, 2023·edited Aug 30, 2023

Lexicalism doesn't necessarily violate transitivity. I think it does have to violate at least one of weak independence, transitivity, the independence of irrelevant alternatives (IIA), completeness *or* certain continuity assumptions. See https://www.researchgate.net/publication/303858540_Value_superiority

Weak independence: If an object e is at least as good as e’, then replacing e’ by e in any whole results in a whole that is at least as good.

Here are some specific transitive and complete lexical views and some of their other properties:

1. Leximin and lexical threshold negative utilitarianism (LTNU) satisfy weak independence and IIA, but violate even weak continuity assumptions (a toy illustration of this follows the list).

2. Rank-discounted utilitarianism (RDU; e.g. Definition 1 in https://econtheory.org/ojs/index.php/te/article/viewFile/20140629/11607/345) satisfies IIA and some weak continuity assumptions, but violates weak independence.

3. Limited, partial and weak aggregation views can (I'd guess) be made to satisfy weak continuity assumptions (and transitivity), but violate IIA. I'm not sure if they can also be made to satisfy weak independence.
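
To make item 1 concrete, here's a toy Python sketch (mine, with made-up numbers, not from any of the linked papers). It compares equal-sized welfare distributions by leximin and shows the continuity violation: no finite number of harms of degree -1 ever outweighs a single harm of degree -10.

```python
# Toy leximin comparison: rank two equal-sized welfare vectors by
# comparing their sorted utilities from the worst off upward.

def leximin_key(world):
    """Sort utilities ascending, so the worst-off position is compared first."""
    return sorted(world)

def leximin_prefers(a, b):
    """True if world `a` is strictly better than world `b` under leximin."""
    return leximin_key(a) > leximin_key(b)  # list comparison is lexicographic

# One person suffering a harm of degree -10 (everyone else untouched) versus
# n people each suffering a lesser harm of degree -1.
for n in (10, 1_000, 1_000_000):
    one_big_harm = [-10] + [0] * (n - 1)
    many_small_harms = [-1] * n
    # Leximin always prefers the many small harms: the worst-off person is
    # at -1 rather than -10, no matter how large n grows.
    assert leximin_prefers(many_small_harms, one_big_harm)
print("No finite number of -1 harms ever outweighs one -10 harm under leximin.")
```

That there is no finite n at which the lesser harms start to outweigh the greater one is exactly what the weak continuity assumptions rule out.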

Your sequence argument doesn't work against rank-discounted utilitarianism in particular, because that view violates weak independence and/or because your argument isn't specific enough about the steps in value. For example, if we interpret the aggregated utilities as individual harms or bads and benefits or goods, then for any harm of degree x, there is some slightly lesser harm of degree y and a finite number N such that N harms of degree y are worse than one harm of degree x.

What rank-discounted utilitarianism violates instead is a *uniform* continuity assumption: that there is a finite difference in degree d>0 of harm such that, no matter how great a harm x<0, an individual harm of degree x-d<x<0 can be outweighed by some finite number of lesser harms of degree x. But to even state this assumption, you need to assume you can measure differences in harms; it's also much less intuitively obvious because of how abstract it is, and it's not clear why the same d>0 should work for every x. (You could generalize it with metric spaces or uniform topological spaces to avoid differences and even distances between harms, but that's even more abstract.)
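
As a sanity check on these claims, here's a small sketch. I'm assuming the simple fixed-population geometric form V = sum over ranks r of beta^r * u_(r), worst-off first, with 0 < beta < 1; that's my simplification of the linked Definition 1, and beta and all the numbers below are illustrative.

```python
# Toy rank-discounted utilitarian (RDU) value: sort utilities from worst
# to best and discount geometrically by rank.

BETA = 0.9

def rdu_value(world):
    """Sum of BETA^r * u_(r) over ranks r = 1, 2, ..., worst-off first."""
    return sum(BETA ** r * u for r, u in enumerate(sorted(world), start=1))

# Claim 1: a harm of degree x = -10 is outweighed by finitely many
# slightly lesser harms of degree y = -9.
x, y = -10, -9
for n in range(1, 100):
    if rdu_value([y] * n) < rdu_value([x]):
        print(f"{n} harms of degree {y} are worse than one harm of {x}.")
        break

# Claim 2 (failure of *uniform* continuity with a fixed gap d = 1): for a
# harm near zero, x = -0.1, the greater harm x - d = -1.1 can never be
# outweighed by harms of degree x, because sum(BETA^r * x) converges to
# x * BETA / (1 - BETA) = -0.9, which stays less bad than -1.1's value of -0.99.
x, d = -0.1, 1.0
limit = x * BETA / (1 - BETA)   # the worst any number of x-harms can ever be
single = rdu_value([x - d])     # the one slightly greater harm
print(limit, single, limit > single)  # ~ -0.9  -0.99  True: never outweighed

# The weak independence violation: seven people at utility 1 are together at
# least as good as one person at 5, but adding one person at 10 to each
# population reverses the ranking, because the 10 is discounted more heavily
# behind seven people than behind one.
e, e_prime, w = [1] * 7, [5], [10]
print(rdu_value(e) >= rdu_value(e_prime))          # True: ~4.70 >= 4.5
print(rdu_value(e + w) >= rdu_value(e_prime + w))  # False: 9.0 < 12.6
```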

That being said, all of the above views can be interpreted as aggregative in some sense (or possibly in different senses) anyway. Leximin and LTNU can be represented by separable utility functions, even if not real-valued ones, while RDU can be represented with a real-valued utility function. Limited, partial and weak aggregation are aggregative by name. A truly non-aggregative view could be Scanlon's Greater Burden Principle / Regan's harm principle, according to which the greatest individual harm/potential loss is prioritized lexically above all others. That view violates IIA, so it can't be represented by a utility function, although it could still be separable if we order lexicographically based on harm.

See also https://centerforreducingsuffering.org/research/lexical-views-without-abrupt-breaks/

Aug 29, 2023·edited Aug 29, 2023

I don't think Parfit's argument necessarily demonstrates that our evaluations must be aggregative. Instead, it could be that we should choose whole policies over all future decisions rather than making decisions locally. In cases where we have to (or expect to) make large numbers of independent decisions, as in the example, what we should do will indeed agree with aggregation; but if we only have to (or expect to) make a few such decisions, it needn't.
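
To illustrate, here's a toy reconstruction of mine (the setup and numbers are made up, not taken from the post). Each of N decisions offers either a benefit of 1 to one particular person or a benefit of 1.5/N to each of N people; a non-aggregative rule applied locally dismisses the tiny per-person benefit every time, yet the spread policy leaves literally everyone better off once all N decisions are in.

```python
# Toy comparison of local (per-decision) choice vs whole-policy choice.
# All numbers are illustrative.

N = 1000  # number of people, and also the number of independent decisions

def run_policy(choose_spread):
    """Decision i either gives person i a benefit of 1 (concentrated)
    or gives every one of the N people a benefit of 1.5 / N (spread)."""
    welfare = [0.0] * N
    for i in range(N):
        if choose_spread:
            for j in range(N):
                welfare[j] += 1.5 / N
        else:
            welfare[i] += 1.0
    return welfare

local = run_policy(choose_spread=False)   # non-aggregative rule applied locally
policy = run_policy(choose_spread=True)   # what aggregation (or policy choice) picks

# Per decision, the spread option gives each person only 0.0015, which a
# non-aggregative view may dismiss; over the whole policy everyone ends up
# at 1.5 instead of 1.0 -- a Pareto improvement.
print(all(p > l for p, l in zip(policy, local)))  # True
```

The point is that a non-aggregative view needn't evaluate the decisions one at a time: applied to whole policies, it can itself prefer the spread policy when many such decisions are expected, while still diverging from aggregation when only a few are.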

As I think you're aware, similar arguments can be made against unbounded utility functions (unbounded expected values), although they require arbitrarily many (but still finitely many) decisions to deliver an almost surely worse outcome, and the scenarios are not very realistic. We can get infinitely many decisions through acausal influence in a multiverse or in case things go on forever over time, but unbounded stakes across the separate decisions seem hard to defend without getting into extremely speculative and far-from-standard possibilities. See https://alexanderpruss.blogspot.com/2022/10/expected-utility-maximization.html?m=1 for an abstract example that could apply across separate correlated agents.
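
For a flavor of the structure (a generic St. Petersburg-style construction of mine, not Pruss's specific example): you hold a ticket paying 2^k with probability 2^-k, which has infinite expected value. Once your ticket's value is revealed, trading it for a fresh ticket minus a fee of 1 again has infinite expected value, so an unbounded expected-utility maximizer accepts every such swap; yet after n swaps the outcome is distributed exactly like a single ticket shifted down by n, i.e. stochastically worse than never swapping, and only the full infinite sequence drives the outcome down without bound.

```python
# St. Petersburg-style swap sequence against an unbounded utility function.
import random
import statistics

def st_petersburg():
    """Pays 2^k with probability 2^-k (k = 1, 2, ...); infinite expectation."""
    k = 1
    while random.random() < 0.5:
        k += 1
    return 2 ** k

def play(n_swaps):
    """Reveal the ticket, then accept n_swaps offers of 'fresh ticket minus
    a fee of 1'. Each accepted swap maximizes conditional expected utility
    (infinite minus a finite fee beats any revealed finite value), yet the
    final payoff is just the last ticket minus the accumulated fees."""
    value = st_petersburg()
    fees = 0
    for _ in range(n_swaps):
        value = st_petersburg()
        fees += 1
    return value - fees

random.seed(0)
trials = 10_000
never = [play(0) for _ in range(trials)]
always = [play(50) for _ in range(trials)]

# The final tickets are identically distributed, so 50 swaps just shift the
# outcome down by 50: stochastically dominated, even though every individual
# swap maximized (unbounded) expected utility.
print(statistics.median(never), statistics.median(always))  # e.g. 2 vs -48
```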
