10 Comments

I lean towards being anti-confidence-policing, but anyone with a credence above 70% that it's bad (given expert disagreement) is a dogmatist.

Apr 23 · Liked by Richard Y Chappell

Confidence-policing is a new concept to me:

> Rather, it’s the purely procedural criticism of their making a confident assertion as such. The thought seems to be that observers can recognize this as epistemically out of bounds without themselves knowing anything about the issue at all.

A month ago I made a bet against Roman Yampolskiy that his credence in existential catastrophe from AI is too high. I claim that a layperson without any specific knowledge of AI risk can know that his credence is too high. In this case, I think "confidence-policing" by this person would be valid. Do you agree? Is there something different going on here that makes this not confidence-policing?

My bet (archive.is/izuQ2):

"I bet Roman Yampolskiy that his p(doom from AI), which is currently equal to 99.999999%, will drop below 99.99% by January 1st, 2030. If I am right, Roman owes me $100. If I am wrong, and his credence does not drop below 99.99% by 2030, then I owe him $1.

""Bet accepted. To summarize the terms: If your estimate of existential risk due to AI drops from your current credence of 99.999999% to less than 99.99% before 2030, you owe me $100. If it does not drop below 99.99%, I will owe you $1 on January 1st, 2030.

Your current credence, p(doom)=99.999999%, implies that there is at most a 0.01% chance that your credence will ever drop to 99.99%, which is why you risking your $100 to my $1 seems profitable to you in expectation.

On the other hand, this bet seems profitable to me because I think your p(doom)=99.999999% is irrationally high and think that there is >1% chance that you will recognize this, say ""oops"", and throw the number out in favor of a much more reasonable p(doom)<99.99%."""
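A rough sketch of the arithmetic behind the "at most 0.01%" step (my reconstruction, assuming the bettor treats his own future credences as a martingale, per conservation of expected evidence; the symbols $X_t$ and $p_t$ are just labels for the sketch):

Let $X_t = 1 - p_t(\text{doom})$ be the complementary credence at time $t$, so $X_0 = 1 - 0.99999999 = 10^{-8}$. If credences satisfy conservation of expected evidence, $X_t$ is a non-negative martingale, and Ville's maximal inequality gives
$$
P\Bigl(\sup_t X_t \ge 10^{-4}\Bigr) \;\le\; \frac{X_0}{10^{-4}} \;=\; \frac{10^{-8}}{10^{-4}} \;=\; 10^{-4} \;=\; 0.01\%.
$$
That is, by his own lights his credence should ever drop to 99.99% or below with probability at most 0.01%, which is why staking $100 against $1 looks profitable to him in expectation.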

Dec 18, 2023 · Liked by Richard Y Chappell

I wonder whether what's going on in these cases is a kind of higher-order evidence thing. When we pronounce on controversial, disputed issues we (often) know that other people have different views to us, or will almost certainly come to a different view on the issue even after considering our arguments. And often those people are our epistemic peers, or at least not epistemically dismissible. So this is evidence that people with the kind of epistemic capacities we have aren't terribly reliable at evaluating the evidence. And that's evidence that, when I give my honest, considered views on such issues, there's a pretty good chance I have misevaluated the evidence. And that should lead me to have low confidence in such matters. So when someone has a very high confidence in such a matter, they're either just ignoring the higher order evidence or, insultingly, presupposing that nobody who disagrees with them is an epistemic peer.

To take a concrete case: when I publish a philosophy paper advancing some view, I can be pretty sure (on the basis of induction) that many, many excellent philosophers will disagree with my view. They might find the view interesting, but it is hardly likely to win universal or even majority assent. I should take that into account, so a high confidence in my view would be unjustified. And third parties, even when they haven't read my (fantastic) paper, can predict all this too, so they are licensed in thinking I'd be a bit epistemically arrogant if I were very confident in the views I published. So they're usually not doing anything wrong in criticizing me for my high confidence. Either I'm misevaluating the higher-order evidence, or I'm just assuming all other philosophers are my epistemic inferiors.


I came across your Substack via a 2008 blog post criticising suspension of judgment as not an admirable position (https://www.philosophyetc.net/2008/07/why-suspend-judgment.html?lr=1&m=1). I wrote a comment there that couldn't be posted (I've pasted it below), and reading this post alongside it makes me want to ask a question: Why do you do philosophy; what is its aim? I'm not asking why people do philosophy or why philosophy has value; I'm asking you, the author of this Substack, Richard Y Chappell, why do you do philosophy? And if you think that's a bad or uninteresting question, why is it?

My original comment:

I apologize for commenting so long after this was written, but I am struck by the word "admirable", which, if I've learned anything from linguistics and economics, displays a revealed preference for the approval of others. Ataraxia, however, is a therapeutic goal: it provides peace as an alternative to the cacophony of an endless debate.

Could it be that there are different goals at play for your imagined hypothetical opponent here? Sure, the judgment suspender is less admirable to you and to others, maybe many others, but I think Pyrrhonists would respond: sure, it's less admirable; I don't care. Philosophy isn't a tool to gain the admiration of others for me, even if it is for you.

User was banned for this comment.