27 Comments

Thanks for the discussion of my post, Richard! This is an interesting argument in favor of the second horn, if we’re willing to pay the cost. I don’t think I accept the general principle the post relies on, but I do interpret my advice to avoid creating AI of disputable moral status as defeasible policy advice rather than a strict requirement, one that can be outweighed in the right circumstances.

I’m inclined to think there’s an important difference between the human farming case and the baby farm case, though. I discuss my own version of this in the “Argument from Existential Debt” section of Schwitzgebel & Garza 2015. The case: Ana and Vijay would not have a child except under the condition that they can terminate the child’s life at will. They raise the child happily for nine years, then kill him painlessly. Is it wrong to have the child under these conditions? And is it wrong, after having had the child, to kill him? Or is it like the humane meat case (as interpreted by defenders of that practice)?

Apr 28, 2023 · Liked by Richard Y Chappell

I really enjoyed this post! Hilary Greaves has a paper, "Against 'the badness of death'," that also discusses how focusing on merely comparative value can warp our thinking.


"I’m not aware of any discussions of the issue at the broad level of generality and applicability that I’ve attempted here; but further references would be most welcome!"

See "Your death might be the worst thing ever to happen to you (but maybe you shouldn't care)" by Travis Timmerman. His Sick Sally and Naive Ned case gets at the same idea. Short version: Sick Sally is inevitably dying of some condition by some particular date. Her friend Naive Ned wants to help her out, so he promises to torture her starting that particular date, if she somehow survives the condition. By promising this, Naive Ned makes Sick Sally's death comparatively good for her instead of comparatively bad for her, since her death allows her to escape his torture. But of course it's completely pointless for Naive Ned to promise to torture her to make her death comparatively good for her. He's not benefitting her at all, because he doesn't change the intrinsic/non-comparative/absolute value outcomes for her. The comparative value of her death doesn't matter. Only the intrinsic/non-comparative/absolute value of her life does.

I wrote about this as well in my BPhil thesis. I call this idea that comparative value doesn't matter in itself the "Axiological Grounding for the No-Difference View." Basically, I claim this assumption is what underlies Parfit's deontic No-Difference View. Coincidentally, today I have been writing a conference abstract on the Axiological Grounding for the No-Difference View! I am close to finishing the abstract but got distracted and then saw this post.

By the way, all this contradicts what you say about death in Value Receptacles. I have some draft of something somewhere in which I refer to your view about death's intrinsic harm in Value Receptacles as a form of opposition to the Axiological Grounding for the No-Difference View which I needed to refute. It sounds like you've come around!

Mar 17, 2023 · edited Mar 17, 2023

(a) Generally agree that merely comparative harms shouldn't bother us.

(b) However, on the specific matter of the creation of lives - apologies for beating the same drum again, but it really depends on meta-ethics. I made the same point on the "Don't Valorize the Void" post, but basically: if welfare is good because (and only because) we as individuals care about our welfare (Premise C), then things being good requires actual people (whether past/present/future) to exist in the first place - otherwise there is no source of value, and no basis on which to judge what is good and what is not.

I've made the main arguments for Premise C before, so I won't rehash them so much as append them as an annex below for anyone who wants to consider them. One other interesting consideration, however, is what you might call the Harry Potter Argument.

(Premise 1: Totalist View) The interests of merely contingent people (specifically, people whose existence is contingent on us choosing to create them, vs them existing anyway) matter.

(Premise 2: Non-Experiential Interests/Preferences) People have non-experiential welfare interests/preferences. For example, an author might reasonably want his book to succeed even after his death; or someone might want their partner to remain faithful even if the infidelity is something they will never know about (or, in more extreme cases, have a preference that they/their partner not remarry after one partner's death, on the basis that this is more romantic & better honours the concept of love). I don't believe any of this is too controversial for those of us who reject hedonism - there's no reason why what we care about has to overlap only with the class of things that affect what we experience.

(Conclusion: Implausible Result) But if the interests of merely contingent people matter, and these people have non-experiential interests, then the non-experiential interests of merely contingent people matter, and we would have reason to advance them, even in the sub-set of cases where *these merely contingent people end up not existing at all*. For example, suppose a merely contingent person (call him Harry Potter) would, if he existed, end up marrying a non-contingent real person (call her Ginny Weasley). Harry has a strong and selfish preference that Ginny not marry anyone else after he is dead - and this extends to a preference that she not marry anyone else even if he never existed at all. If we took the whole argument seriously, we would have to say that the real person Ginny has at least *some* pro tanto reason not to marry, based on the wishes of a merely hypothetical person - and this, I submit, is implausible.

----- Annex: Main Arguments for Premise C -----

In any case, the main arguments for Premise C are twofold:

(1) At a positive level, we obviously do care about our own lives/freedom/happiness/etc., and as a result these things are good (possess ought-to-be-ness, have reason for existence, whatever). And if you take a step back and ask what would happen if you didn't care about these things, there *doesn't seem to be any reason for the universe to care* - there doesn't appear to be any reason, separate from your caring, for these things to matter.

(2) It would be an extremely implausible metaphysical coincidence if our welfare just happened to be good from the point of view of the universe, separately from us caring about it. For the sake of argument, consider that there metaphysically could be a planet of anti-humans - with the residents there telically desiring the anti-welfare of humans (i.e. that we die, are made slaves, are unhappy, etc.), and having the same pro-attitudes towards the inverse of the things we have pro-attitudes towards. And it's just hard to justify why we would be cosmically right and they cosmically wrong - why it just happens that the stuff we value (and not the stuff the anti-humans value) is what the universe also values in the Mackie/Platonic sense. In other words, debunking arguments are compelling, unless you have Premise C and some sort of meta-ethical view that links valuation to objective value in a way that (a) avoids the coincidence, and (b) still gets you a sufficiently strong sense of mind-independence to defeat the radical moral sceptic who keeps asking why we should care about others.

You previously raised the problem of temporary depressives, but (i) the most plausible desire-based theories of welfare are global/life-cycle ones (i.e. what does person X, as a chain-of-continuous-consciousness, intrinsically want?). That is to say, from the perspective of who we are over time, XYZ things matter, and normal human blips from depression/fatigue/etc. don't change what those XYZ things are. Moreover, this gets around logical issues like there being infinitely many people at t1, t1.25, t1.5, etc., all wanting the same thing Y, such that Y has infinite value.

(ii) I'm not even certain it makes sense to say that a depressed person doesn't want to be happy. They may not see the meaning of life, because life isn't happy - but that doesn't mean they don't see the value of happiness. If you asked any depressed person whether they would press a button that would make them magically happy, they would! The upshot is that this wouldn't be a fair/useful intuition pump.


I’m not that sure what I mean either. But something seems to be missing. I’m trying to put my finger on it.

When we discuss the moral assessment of large-scale policies, we can consider, among other things, how we know a policy is best and what institutional structure would allow us to implement it. The topic implicitly raises questions such as: when is it good to take an action that affects moral agents without their knowledge? Without their consent? If someone became dictator, how would they constrain themselves, or even know whether they were acting as a benevolent dictator? The discussion seems to assume that everything being discussed is independent of such issues, but I don’t think it is.

Asking moral questions as if, once we knew the answer, we would be justified in unilaterally imposing it on the world seems odd to me, unless I were much more confident of my own judgement and of others' ability to be persuaded by good reason.

When would it be wise to impose the best policy on a population that unanimously opposed and misunderstood it? What effect would treating moral agents as moral patients have on them, if we don’t assume they agree with our conclusions and consent? How is the best still the best in such circumstances? I think they would have to be very unusual.

Beings can't consent to being created. But this means that whoever creates them is acting as their proxy. Such proxies could ask whether they can *expect* that their creations would subsequently grant retrospective consent, or whatever else they think respects them as potential moral agents and actual moral patients. What do the proxies owe their creations if they either calculate incorrectly or decide by some entirely different criteria?


I’m trying to put my finger on the reason the point about relative and absolute harms seems so unimportant to me. The post discusses morality. But it never mentions consent.

I see consent as central, but not determinative. Every moral theory has implications regarding consent, or premises about it. The usual approach of this blog dismisses extreme interpretations of utilitarianism, but that dismissal depends on all of this working out somehow. Does it?
