Do you have an argument for beneficentrism, or do you just take it to be self-evident or intuitive? FWIW, I would be inclined to accept some version of it, but that's based on various background moral commitments that motivate beneficence as a central moral principle. (E.g., I'm sympathetic to overlapping consensus as a moral methodology, and it aligns with my Christian faith, pace TonyZa's gloss.)
"Beneficentrism strikes me as impossible to deny while retaining basic moral decency. Does anyone disagree? Devil's advocates are welcome to comment."
That depends on how you define basic moral decency, and on whether you even believe there is any moral tenet that is universal.
But the truth is that most moral/religious/ideological systems in history didn't place much value on utilitarianism/beneficentrism. The Ancient Greeks, Romans, Vikings, and Mongols were not big on utilitarianism. Even Christianity, which puts a lot of weight on Works, is first concerned with Faith. Marxists might look utilitarian, but they see class struggle, the Revolution, and the accompanying bloodshed as taking priority, with utopia arriving only in the final phase. Nietzsche saw utilitarianism as a defining characteristic of slave morality.
The problem with EA is precisely one of the issues you took with deontic minimalism ("didn't murder anyone today, hooray!").
EA maximizes the impact of the least possible effort. We may be great EAs, and we can pat ourselves on the back for spending time finding the best place to donate our 10%, but that is still only 10%! 100% effectiveness from a grape is nothing compared to 10% of a watermelon.
Perhaps EAs could recruit all the watermelons, BUT this IS the problem with EA: in order to be a watermelon (have enough money that your impact is actually an impact) you need to do some rather seedy things.
Instead of trying to figure out how to squeeze the juice equivalent of 10% of a watermelon out of a grape, EAs should spend their efforts (working and charitable) on designing and implementing systems that achieve those ends without the sacrifice. And this is possible. It only requires a few watermelons to accept slightly more risk than they are used to, RATHER than all watermelons becoming EAs (fat chance).
If we switch perspective from "cash on hand and what to do with it" to "how much more risk can I reasonably take on", then we will naturally develop solutions to the same challenges EAs have rightly determined we should address (I won't enumerate).
And to understand fair risk distribution we have to use deontic structures. Utilitarians do not have metrics for risk, only results.
One issue might be that there can develop a bit of a status competition/hierarchy as to who's being the most altruistic and that can be kind of off-putting sometimes.
I'd love to know if you have any reflections on what Nadia wrote, which is less a deep-philosophy take and more a practical-philosophy take on EA and EA-adjacent ideas being "idea machines". Also, have you read the recent Larry Temkin book looking at EA thinking (though mostly with respect to global-poverty EA)? https://forum.effectivealtruism.org/posts/CbKXqpBzd6s4TTuD7/thoughts-on-nadia-asparouhova-nee-eghbal-essay-on-ea-ideas
Glad to see you on SS!! Enjoy ;)