38 Comments

ChatGPT wrote a poem about this argument.

Person-affecting views in population ethics,
Raise objections with their implications,
A barren rock, just as good as utopia?
Such claims bring forth ethical complications.

How can we value an empty place,
Over a world full of love and grace?
Is it fair to equate a void to paradise,
And make both seem like they're in the same place?

The value of life is in its living,
Its richness, joy, and love it's giving,
To suggest a barren rock is equal to utopia,
Is to ignore the beauty in living.

A utopia might not exist today,
But the hope for a better world leads the way,
To settle for less and call it the same,
Is to make morality just a game.

Let us strive for the best we can achieve,
For a world of happiness and love we can conceive,
Where every life has value and meaning,
And barren rocks remain barren, unfeeling.

Jan 27, 2023 · Liked by Richard Y Chappell

Sorry to start yet another thread, but I wanted to mention another thought that occurred to me while reading your post against subjectivism:

I agree that "Normative subjectivists have trouble accommodating the datum that all agents have reason to want to avoid future agony" gets at a real problem for subjectivism; but I find it telling that the strongest example you can come up with is avoiding pain. At least for me, my intuitions just really are very asymmetrical with respect to pleasure and pain, and I suspect you picked "avoiding future agony" rather than "achieving future joy" because you have the intuition that the former is a harder bullet to bite than the latter.

I think this asymmetry is why I feel intuitively bound to rank the unfortunate child against the void in a way I don't feel when it comes to the happy child; and why I don't like the idea of us turning ourselves into anti-humans, but I don't have a strong intuitive reaction against us choosing the void--I think our reasons for avoiding pain are much more convincing and inarguable than our reasons for pursuing pleasure.

I think in general, utilitarianism has a harder time working out the details of what should count toward *positive* utility--this may just be my impression, but I'd guess there's a lot more controversy over what counts as well-being, and what sorts of things contribute to it, and in what way, than over what sorts of things contribute to *negative* utility.

I think maybe the reason I think of pleasure and pain as asymmetric, then, is that I find utilitarianism's arguments much more convincing when talking about suffering; so maybe one doesn't need to adopt an extreme view like "all utility functions are bounded above by 0" to explain why it feels more intuitive to reason about preventing suffering than about promoting joy. Maybe it's a matter of moral uncertainty: no plausible competitor can think it's good to let someone suffer pointlessly; that's more or less the strongest moral datum we have. But plausible competitors *can* disagree with utilitarian conclusions about well-being.

Jan 26, 2023 · edited Jan 26, 2023 · Liked by Richard Y Chappell

"One might infer that good states above a sufficient level have diminishing marginal value"

Can't one just restate the original gamble, but now with the Utopia stipulated to have arbitrarily large value, instead of whatever other good it was measured in before? If value itself is the unit of evaluation, then shouldn't a non-risk-averse person be indifferent between a decent world and a 50/50 gamble with outcomes +N value and −N value, for any N?
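
To spell out the expected-value arithmetic here (a minimal sketch, normalizing the decent world's value to 0 purely for illustration):

$$
\mathbb{E}[\text{gamble}] = \tfrac{1}{2}(+N) + \tfrac{1}{2}(-N) = 0,
$$

which exactly matches the decent world for every N, however astronomically good or bad the two outcomes are.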

Even if you think there is a maximum possible value (which, as you note in the other post, has its own problems), it doesn't seem outrageous to me that the maximum would be large enough to admit a gamble of this form that would still be very counterintuitive for most people to actually accept over the alternative.

To the general point: I made a similar argument in the comments to an earlier post on a similar topic, but isn't it enough to note that most people have a preference for Utopia over the void, and argue that Utopia is better on the grounds that it satisfies our current preferences more? Does there need to be an *intrinsic* reason why Utopia is better than the void?

In general, the idea of intrinsic value seems odd to me. What appeals to me about consequentialism and utilitarianism is that they are very person-centric: utility is about what's good *for people*, unlike deontology or divine command or whatever, which center "goodness" somewhere else, somewhere outside of what actually affects and matters to people.

Obviously the above is too naive a conception of utilitarianism to be all that useful: we often face dilemmas where we have to decide how to evaluate situations that are good for some people but not for others, or where we face uncertainty over how good something is, or whether it's good at all, and so we need a more complex theory to help us deal with these issues.

But when contemplating the void, it feels to me like we aren't in one of these situations: there are no people in the void, and so no one for whom it could be good or bad; the only people for whom it can be good or bad are the people now who are contemplating it, and so we should be free to value it however we want, with no worry of our values coming into conflict with those of the people who live in that world. As it happens, we (mostly) currently very strongly disprefer the void--but there's no intrinsic reason we have to, and if we were to collectively change our minds on the point, that would be fine.

“taking it as a premise that positive intrinsic value is possible (utopia is better than a barren rock)”

Is the parenthetical claim an application of the premise about intrinsic value, or are the two unrelated? I can think that utopia is better than a barren rock without accepting anything about intrinsic value. Am I just using the terms differently?

How is intrinsic value different from utility? I guess instrumental value counts as utility too, although it derives its utility from the end to which it is a means.

In this context, would extrinsic value and instrumental value be the same thing?

I agree with this, with one exception. I think that it is, in fact, possible to argue people out of the 'pleasure isn't good, but pain is bad' position. Among other things, even worse than implying utopia is worse than a barren rock, it implies it would be morally neutral to press a button ensuring that no future person is ever happy again--and that utopia is no better than everyone just living slightly worthwhile lives with no suffering. That a life filled with love, good food, and general joy is no better than muzak and potatoes.

"The fires of the soul are great, and burn with the same light as the stars"

Merlin (HPMOR)

>"Saying this risks coming off as insulting, but I don’t mean it that way"

>[the next paragraph:] "it’s insane to deny this premise... purely negative ethical views are insane"

(I think the "not being insulting" thing may need a little work here)

I don't think this question is generally addressable without discussing your meta-ethical views.

If welfare is good because (and only because) we as individuals care about our welfare (call this Premise C), then things being good requires actual people (whether past, present, or future) to exist in the first place - otherwise there is no source of value, and no basis on which to judge what is good and what is not.

Note that this isn't necessarily constructivism, insofar as something like "X is good iff it is non-contingently desired" is plausibly mind-independent in the Shafer-Landau sense, hence yielding some kind of moral realism.

The real question, of course, is whether we should accept Premise C. I think, broadly speaking, there are two compelling reasons for it:

(a) At a positive level, we do obviously care about our own lives/freedom/happiness/etc, and as a result these things are good (possess ought-to-be-ness, have reason for existence, whatever). And if you take a step back and ask what would happen if you didn't care about these things, there *doesn't seem to be any reason for the universe to care* - there doesn't appear to be any reason, separate from your caring, for these things to matter.

(b) It would be an extremely implausible metaphysical coincidence if our welfare just happened to be good from the point of view of the universe, separately from us caring about it. For the sake of argument, consider that there metaphysically could be a planet of anti-humans - with the residents there telically desiring the anti-welfare of humans (i.e. that we die, are made slaves, are unhappy, etc.), and having the same pro-attitudes towards the inverse of the things we have pro-attitudes to. And it's just hard to justify why we would be cosmically right and they cosmically wrong - why it just happens that the stuff we value (and not the stuff the anti-humans value) is what the universe also values in the Mackie/Platonic sense. But perhaps this is just a long-winded way of saying the evolutionary debunking arguments are compelling, unless you have Premise C and some sort of meta-ethical view that links valuation to objective value in a way that (a) avoids the coincidence, and (b) still gets you a sufficiently strong sense of mind-independence as to defeat the radical moral sceptic who keeps asking why we should care about others.
