Formal epistemology at its best

(Cross-posted at NewAPPS.)

So far, I have not been following developments in formal epistemology very closely, even though the general project has always been in the back of my mind as a possible case study for my ideas on the methodology of using formal tools in philosophy (and elsewhere). Well, last week I attended two terrific talks in formal epistemology, one by Branden Fitelson (joint work with Kenny Easwaran) in Munich, and one by Jeanne Peijnenburg (joint work with David Atkinson) in Amsterdam. (Full disclosure: Branden is a good friend, and Jeanne is my boss in Groningen! But I’m sure everybody will agree they are among the very best people working on formal epistemology these days.) These two talks illustrate two different ways in which the application of formal methods can be illuminating for the analysis of epistemological concepts and theories, and thus confirmed my hunch that formal epistemology can be a good case study for a more general reflection on formal methodology.

Let me start with Branden’s talk, ‘An “evidentialist” worry about Joyce’s argument for probabilism’. The starting point was the preface paradox, and how (in its ‘bad’ versions) it seems to represent a conflict between evidential norms and coherence/accuracy norms. We all seem to agree that both coherence/accuracy norms and evidential norms have a normative grip over our concept of knowledge, but if they are in conflict with one another (as made patent by preface-like cases), then it looks like we are in trouble: either our notion of knowledge is somewhat incoherent, or there can’t be such a thing as knowledge satisfying these different, conflicting constraints. Now, according to Branden (and Kenny), Jim Joyce’s move towards a probabilistic account of knowledge is to a large extent motivated by the belief that the probabilistic framework allows for the dissolution of the tension between the different kinds of epistemic norms, and thus restores peace in the kingdom.

However, through an ingenious but not particularly complicated argument (relying on some ‘toy examples’), Branden and Kenny show that, while Joyce’s accuracy-dominance approach to grounding a probabilistic coherence norm for credences is able to resist the old ‘evidentialist’ threats of the preface kind, new evidentialist challenges can be formulated within the Joycean framework itself. (I refer the reader to the paper and the handout of the presentation for details.) During the Q&A, I mentioned to Branden that this looks a lot like what has happened with the Liar paradox in recent decades: as is well known, with classical logic and a naïve theory of truth, paradox is just around the corner, which has motivated a number of people to develop ‘fancy’ formal frameworks in which paradox could be avoided (Kripke’s gappy approach, Priest’s glutty approach, supervaluationism, what have you). But then virtually all of these frameworks see the emergence of new and even more deadly forms of paradox – what is referred to as the ‘revenge’ phenomenon. What Branden and Kenny’s work seemed to be illustrating is that the Joycean probabilistic framework is not immune to revenge-like phenomena; the preface paradox strikes again, in new clothes. Branden seemed to agree with my assessment of the situation, and concluded that one of the upshots of these results is that there seems to be something fishy about how the different kinds of epistemic norms interact on a conceptual level, which cannot be addressed simply by switching to a clever, fancy formalism. In other words, probabilism is great, but it will not make this very problem go away.

This might seem like a negative conclusion with respect to the fruitfulness of applying formal methods in epistemology, but in fact the main thing to notice is that Branden and Kenny’s results emerge precisely from the formal machinery they deploy. Indeed, one of the most fascinating features of formal methods generally speaking is that they seem to be able to probe and explore their own limitations: think of Gödel’s incompleteness theorems, Arrow’s impossibility theorem, and so many other revealing examples. It is precisely by deploying these formal methods that Branden and Kenny can then conclude that more conceptual discussion on how the different kinds of epistemic norms interact is required.

Three days later, I attended Jeanne’s talk at the DIP-colloquium in Amsterdam (the colloquium I used to run when I was still working there). The title of the talk was great, ‘Turtle epistemology’, which of course refers to the famous anecdote ‘it’s turtles all the way down!’. Jeanne and her co-author David are interested in all kinds of regress phenomena in epistemology, in particular in the foundationalist claim that infinite regress makes any justification impossible. I quote from the abstract:

The regress problem in epistemology traditionally takes the form of a one-dimensional epistemic chain, in which (a belief in) a proposition p_1 is epistemically justified by (a belief in) p_2, which in turn is justified by (a belief in) p_3, and so on. Because the chain does not have a final link from which the justification springs, it seems that there can be no justification for p_1 at all. In this talk we will explain that the problem can be solved if we take seriously what is nowadays routinely assumed, namely that epistemic justification is probabilistic in character. In probabilistic epistemology, turtles can go all the way down.

They start with a formulation of justification in probabilistic terms, more specifically in terms of conditional probabilities: proposition E_{n+1} probabilistically supports E_n if and only if E_n is more probable if E_{n+1} is true than if it is false:

P(E_n | E_{n+1}) > P(E_n | ~E_{n+1})

The rule of total probability then becomes:

P(E_n) = P(E_n | E_{n+1}) · P(E_{n+1}) + P(E_n | ~E_{n+1}) · P(~E_{n+1})
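
To make the rule concrete with purely illustrative numbers of my own (not from the talk): suppose P(E_0 | E_1) = 0.9, P(E_0 | ~E_1) = 0.2, and P(E_1) = 0.5. Then:

P(E_0) = 0.9 · 0.5 + 0.2 · 0.5 = 0.55

So E_1 probabilistically supports E_0 (since 0.9 > 0.2), and the rule fixes P(E_0) once P(E_1) is given.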

Again through an ingenious and very elegant argument, Jeanne and David then formulate infinite chains of conditional probabilities, and show that, contrary to the foundationalist claim, such chains can in fact yield a determinate probability for the proposition in question. This is because the longer the chain, and thus the further away the ‘ur-proposition’ is (the one we can never reach, because the chain is infinite), the smaller its influence on the total probability of E_0. In the limit it gets cancelled out altogether, as it is multiplied by a factor that tends to 0 (for details, check their paper here, which appeared in the Notre Dame Journal of Formal Logic).
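
To get a feel for why the far end of the chain washes out, here is a minimal numerical sketch of my own (not from their paper), under the simplifying assumption that the conditional probabilities are uniform along the chain, i.e. P(E_n | E_{n+1}) = a and P(E_n | ~E_{n+1}) = b for every n, with a > b:

```python
# Illustrative sketch (not from Atkinson and Peijnenburg's paper): a chain
# with uniform conditional probabilities a and b, truncated at varying depths.

def chain_probability(a: float, b: float, depth: int, seed: float) -> float:
    """Apply the rule of total probability from the deepest link up to E_0:
    P(E_n) = a * P(E_{n+1}) + b * (1 - P(E_{n+1}))."""
    p = seed  # whatever probability we assign to the deepest proposition
    for _ in range(depth):
        p = a * p + b * (1 - p)
    return p

a, b = 0.9, 0.2  # hypothetical values with a > b (probabilistic support)
for depth in (1, 5, 10, 20, 50):
    # Truncate the chain at `depth` with two extreme seeds: the deepest
    # proposition certainly true vs. certainly false.
    p_high = chain_probability(a, b, depth, seed=1.0)
    p_low = chain_probability(a, b, depth, seed=0.0)
    print(f"depth {depth:2d}: P(E_0) between {p_low:.6f} and {p_high:.6f}")
```

Whatever ‘seed’ probability we assign to the deepest proposition, its contribution to P(E_0) is multiplied by (a - b)^depth, which tends to 0 as the chain grows; both truncations converge to the same value b / (1 - a + b) (here 0.2 / 0.3 ≈ 0.667), so the infinite chain determines P(E_0) all by itself.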

The moral I drew from their results is that, contrary to the classical, foundational-axiomatic conception of knowledge and science, the firmness of our beliefs is in fact not primarily grounded in the very basic beliefs all the way down the chain, i.e. the ‘first truths’ (Aristotle’s Arché). Rather, the influence of these basic beliefs becomes smaller and smaller the further down the chain they sit. At this point, there seem to be two basic options: either we accept that the classical foundationalist picture is wrong, or we deny that the probabilistic analysis of justification in fact captures our fundamental concept of knowledge. Either way, this particular formal analysis was able to unpack the consequences of adopting a probabilistic framework, and to show not only that, in this setting, infinite regress need not be an insurmountable problem, but also that the epistemic weight of ‘basic truths’ may be much less significant than is usually thought. In a sense, this seems to me to be an example of Carnapian explication, where the deployment of formal methods reveals aspects of our concept of knowledge that we were not aware of.

Thus, these two talks seemed to me to illustrate the strengths of formal methodologies at their best: in investigating their own limits, and in unpacking features of our concepts that would otherwise remain ‘hidden’, buried under their more superficial layers. I guess I’m starting to like formal epistemology…

Comments

  1. Thanks for the terrific post, Catarina! It's funny how you saw this as a "revenge" type phenomenon. I didn't mention this in the talk, but we sometimes do refer to this problem as "Kolodny's revenge" (since we see it as a way for an 'evidentialist' like Kolodny to get a foothold in the context of arguments for probabilistic coherence norms/requirements of rationality). And I really like the work Jeanne and David are doing on infinite epistemic chains. My colleague Peter Klein has been arguing for infinitism for a few years now, so I've become more sympathetic to it. It's nice that Jeanne and David have placed this intuitive Kleinian idea on a firm theoretical foundation.

  2. Hi Branden, I'm glad you liked the post :) I think a lot about the epistemic value of bringing in formal models, and revenge can be seen as a general, recurrent phenomenon: you bring in the fancy formalism, but, as it turns out, the problems the formalism was supposed to deal with somehow re-emerge! That seemed to me to be one of the main points of your and Kenny's results.

    As for Jeanne and David, their work is awesome! I'm so proud to be a colleague of theirs in Groningen. It's funny that both you and Jeanne said you were flattered to be mentioned together in the same blog post, so the admiration is mutual :)

  3. Any chance of remarks relating the talks to Hendricks' _Mainstream and Formal Epistemology_?

  4. @Donald, I haven't read this text, but I'll keep it in mind! The relations between 'traditional' and formal epistemology are indeed worth discussing in more detail. Trouble is, I'm not much of a fan of traditional epistemology for the most part...
