9 Comments
Mar 29 · Liked by Richard Y Chappell

Have you seen Tim Williamson's new work on heuristics in philosophy? https://www.philosophy.ox.ac.uk/sitefiles/overfittingdraftch1.pdf

Seems similar in spirit to your last paragraph.


Excellent article but it frustrates me a bit that it's necessary.

Between this epistemic mistake and the negative utilitarian stuff, it seems like there's a decent minority of philosophers who just love the idea of being pro-end-of-the-world.

I'd rather not speculate on the motivations. Maybe it is, as you say, mostly people taking formal arguments too far, but that doesn't explain the fairly unrelated negative utilitarian arguments, which (in my experience) are often held by the same people as the risk-aversion arguments.

And this would be fine, since it's not as if weird philosophical views (Lewisian modal realism, say) lack defenders. But I've seen this "maybe extinction is good actually" line brought up as an argument in actual debates on AI risk policy and existential risk mitigation.


I think an important element is whether the probability of a future full of suffering actually is tiny. That changes the calculations a lot.

If you ask "do you want to be born, with a 1 in a million chance of suffering?", it sounds silly to refuse. But is 1 in a million the correct number?

If instead I ask "Do you want to be reincarnated as a random animal, given a 90% chance of being born as a small fish or bug that will die of hunger a few days after birth, a 9% chance of being a factory-farmed animal living in a cramped cage its whole life, and a 1% chance of being a successful human or animal that will reach adulthood?", the conclusion should be pretty different.


"Recall that risk-averse decision theories are motivated purely by intuitions about cases." Are we sure that's uncontroversial?

It's also worth distinguishing risk aversion from ambiguity aversion (though maybe you've done this already).
