3 Comments
Nov 14, 2023 · edited Nov 15, 2023 · Liked by Richard Y Chappell

A big idea:

It seems to me that once you get away from utilitarianism, it's almost inevitable that you're going to end up being a moral particularist to some degree.

So far, moral-particularist theories have been basically intractable to "analyze". But, in principle, AI might eventually offer tools to explicitly represent (at least approximations of) ultra-complex moral-particularist theories. How would such models be trained? Perhaps by using experimental philosophy questionnaires to elicit people's intuitions.
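To make the idea concrete, here is a deliberately toy sketch of what "representing" a particularist theory might look like: instead of encoding fixed principles, the model stores elicited case–verdict pairs and judges new cases by similarity to previously judged ones. Everything here (the feature names, the cases, the similarity measure) is hypothetical illustration, not a real dataset or method; an actual AI approach would presumably learn a far richer similarity metric from questionnaire data.

```python
# Toy sketch: a particularist "theory" as case-based reasoning over
# elicited intuitions, with no general principles stored anywhere.
# All feature names and cases below are hypothetical illustrations.

def similarity(case_a: dict, case_b: dict) -> float:
    """Fraction of morally relevant features on which two cases agree.
    A crude stand-in for whatever similarity a trained model would learn."""
    keys = set(case_a) | set(case_b)
    return sum(case_a.get(k) == case_b.get(k) for k in keys) / len(keys)

def predict_verdict(new_case: dict, elicited: list) -> str:
    """Return the verdict attached to the most similar judged case."""
    best_case, best_verdict = max(
        elicited, key=lambda pair: similarity(new_case, pair[0])
    )
    return best_verdict

# Hypothetical elicited intuitions (e.g. from a questionnaire):
elicited = [
    ({"harm": True,  "consent": False, "promise_broken": False}, "wrong"),
    ({"harm": False, "consent": True,  "promise_broken": False}, "permissible"),
    ({"harm": True,  "consent": True,  "promise_broken": False}, "permissible"),
]

# A new case is judged by analogy, not by applying a principle:
verdict = predict_verdict(
    {"harm": True, "consent": True, "promise_broken": True}, elicited
)
print(verdict)  # most similar judged case drives the answer
```

The point of the sketch is only that case-by-case judgment can in principle be made explicit and computable; the hard open question is whether any learned similarity metric could track what particularists think actually matters.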

The tech is not there yet, but could it ever get there? I'd like to hear from the Wittgenstein-Anscombe-inspired particularists.

e.g. https://academic.oup.com/edited-volume/43987/chapter-abstract/371424801?redirectedFrom=fulltext


Regarding 2 (as well as 3 and 5), I'd be interested to hear more about how your ideas relate to the ideas of "consequentialization" of moral theories and "scalar ethics".
