Discussion about this post

Daniel Greco

For both Chris Samp and Some Lawyer, yeah, a natural way to avoid the tension is to constrain fallibilism. It's not that you can't be 100% certain about *anything*. Rather, it's that your 100% certainties should be restricted to some limited class of claims. Some Lawyer is suggesting they should just be claims about your experiences. Chris Samp seems happy for certainties to include claims about poll results. But maybe both will agree there's no certainty in abstract scientific theories.

I've got my reasons for not loving this style of answer, but I don't think they're knock-down. For claims about experiences, I tend to think that for facts about what you've experienced to be the sorts of facts you're comfortable treating as absolutely certain, indubitable, etc., you have to drain them of quite a lot of content. E.g., it can't be something like "I'm hungry", because you might be mistaking hunger for some other kind of stomach pain. And I'm not sure that experiences drained of that much content can play the roles they'd need to play as evidence. Certainly it's very hard to imagine actually working with runnable models that treat as evidence facts about what experiences I have, rather than just the poll results themselves. The state space you'd need for such models would be intractably large and unstructured. So once you make this move, you're necessarily, I think, moving to a version of Bayesianism that's pretty detached both from the models we explicitly build for forecasting and from the ones we implicitly reason with in everyday life; I tend to think actual reasoning involves taking a lot more for granted--thinking in terms of "small worlds", as Savage put it--than this kind of picture suggests.

But my preferred approach leads to puzzles too, because you need non-Bayesian ways to stop treating as certain something you previously treated as certain evidence. E.g., if you treat as evidence that some poll obtained a certain result, and then later learn that there was a typo in the report, the process by which you "unlearn" the evidence about the poll result can't just be a straightforward Bayesian update within the model you started with.
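To make that concrete, here is a minimal sketch with made-up numbers (my illustration, not anything from the post) of conditioning on a reported poll result, and of why backing that evidence out again isn't itself an update within the original model:

```python
# Hypotheses about the candidate's true support; priors and likelihoods are
# hypothetical numbers chosen only for illustration.
priors = {0.40: 1/3, 0.50: 1/3, 0.60: 1/3}

# Likelihood of E = "the report says the poll found 52%" under each hypothesis.
likelihood = {0.40: 0.05, 0.50: 0.60, 0.60: 0.35}

# Standard Bayesian conditioning on E.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}
print("after conditioning on the report:", posterior)

# Later we learn the report contained a typo. Within this model E was treated
# as certain, so there is no proposition "E was mistaken" left to condition on.
# The only way back is to step outside the model and reinstate the old priors
# (or rebuild a richer model) -- which is not itself a Bayesian update.
print("to 'unlearn' E we simply restore:", priors)
```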

Godshatter

This is a nice point, but it doesn’t seem too worrying for the Bayesian fallibilist, because being certain in one’s evidence looks like just an artifact of the idealization required to cast learning episodes into this mathematical framework. In order to model these updates, you need certain elements that you treat as fixed for the sake of the model, and then you update from there. But you can always go back and revise those elements… you don’t take them to be fixed forever. (Neurath’s boat and all that.) Like, you can always treat the evidence as a further hypothesis and then ask what the evidence for that is; e.g., whether the agency reporting the polls is usually reliable, etc.
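A minimal sketch of that move, again with made-up numbers (my illustration, not from the comment): condition on "the agency reported 52%" rather than on the poll result itself, with an explicit reliability hypothesis for the agency.

```python
from itertools import product

supports = [0.40, 0.50, 0.60]            # hypotheses about true support
reliable = [True, False]                 # is the agency's report accurate?

prior_support = {h: 1/3 for h in supports}
prior_reliable = {True: 0.9, False: 0.1}  # assumed track record of the agency

def p_report_given(h, rel):
    """Likelihood of observing "report says 52%" (hypothetical numbers)."""
    if rel:   # an accurate report tracks the underlying poll result
        return {0.40: 0.05, 0.50: 0.60, 0.60: 0.35}[h]
    else:     # a typo or error tells us little about true support
        return 0.2

# Joint update: the certainty attaches to the report, not to the poll result.
unnorm = {(h, r): prior_support[h] * prior_reliable[r] * p_report_given(h, r)
          for h, r in product(supports, reliable)}
z = sum(unnorm.values())
posterior = {k: v / z for k, v in unnorm.items()}

# Marginal belief about true support, with the report's reliability integrated out.
marginal = {h: sum(p for (hh, r), p in posterior.items() if hh == h) for h in supports}
print(marginal)
```

Learning about the typo is then just further conditioning (on evidence that the agency was unreliable this time) rather than "unlearning" anything.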
