Michael Caie's 'Calibration and Probabilism' (Guest post by Anna Mahtani)

Michael Caie has a really interesting paper forthcoming in Ergo. The paper is a criticism of van Fraassen's Calibration argument. It's carefully argued and technical, but here I just give the gist of Caie's argument, and highlight a point that I think it would be interesting to pursue.

Here's the rough idea behind van Fraassen's Calibration argument. We can begin with this thought: being calibrated is a 'good-making feature' (as Caie puts it) of an agent's credal state. And to say that an agent's credal state is calibrated is to say, roughly, that the agent's credences in claims of a particular type match the frequency of truths of that type. So here's an example. Suppose that I am looking at a pack of cards. For each card in the pack (call them $c_1, c_2, \ldots, c_{52}$), I have a credence of 1/4 that that card is a diamond ($D$). So we have a set of claims $\{D_{c_1}, D_{c_2}, \ldots, D_{c_{52}}\}$, and I have a credence of 1/4 in each claim in this set. Now in fact (as you would expect), exactly 1/4 of the claims in the set are true - and so my credence matches the frequency. Thus we say that my credal state is 'calibrated' over this set of claims. (In fact, calibration does not require my credence to exactly match the frequency; it requires only that, for any $\epsilon > 0$, the difference between my credence and the frequency be less than $\epsilon$.)
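
To make the definition concrete, here is a minimal sketch in Python - my own illustration, not code from Caie's paper or from van Fraassen; the function name `is_calibrated` and the `epsilon` tolerance are my inventions.

```python
# A toy model of calibration: a credal state that assigns the same credence
# to every claim in a set is calibrated over that set (to within epsilon)
# just in case the credence matches the frequency of truths in the set.

def is_calibrated(credence, truth_values, epsilon=1e-9):
    """Check whether `credence` matches the frequency of truths in
    `truth_values` to within `epsilon`."""
    frequency = sum(truth_values) / len(truth_values)
    return abs(credence - frequency) < epsilon

# The card example: 13 of the 52 claims D_{c_i} ("c_i is a diamond") are true.
deck = [True] * 13 + [False] * 39
print(is_calibrated(0.25, deck))  # True: credence 1/4 matches frequency 13/52
```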

Now suppose instead that I am looking at just one card from the pack, say $c_{24}$. Again I have a credence of 1/4 in $D_{c_{24}}$. For whatever reason (I don't know about their existence, I have never considered them, ...) I have no credences about the other cards in the pack. So if we wanted to gather up into a set all the claims of the form $D_{c_i}$ in which I have a credence, there would be just one claim in that set: $D_{c_{24}}$. Thus for my credal state to be calibrated over this set, I must have either a credence of 1 in $D_{c_{24}}$ (if $D_{c_{24}}$ is true) or a credence of 0 in $D_{c_{24}}$ (if $D_{c_{24}}$ is false). But then being calibrated no longer seems like such a good-making feature: at least, it isn't something that we should require of every rational agent. What we do then, in Caie's words, is 'abstract away from the limitations imposed by the numbers of propositions in this class'. We ask: if we added further relevant claims to the set, could we get the agent's credal state to be calibrated over the (new, extended) set? If so, then the agent's credal state is calibratable over the (original, in this case single-member) set. So in our example, we start with the set containing just $D_{c_{24}}$, and add other relevant claims to this set - perhaps $D_{c_1}$ and $D_{c_2}$, for example. (Exactly what makes a claim 'relevant' is an issue that Caie discusses in the paper - here for simplicity I'll skate over this and assume you get the idea.) We suppose that just as I have a credence of 1/4 in $D_{c_{24}}$, so I have a credence of 1/4 in all these relevantly similar claims in the new extended set. Now can we - by adding claims in this way - get my credal state calibrated over this extended set? In this example, we can: one way to do it would be just to add relevant claims for all 51 remaining cards. And because there is an extended set such that my credal state would be calibrated over it, we can say that my credal state is calibratable over the original set.
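
Calibratability by extension can be sketched in the same hedged way: starting from the single-member set containing $D_{c_{24}}$, we pad the set with relevantly similar claims, choosing their truth values so that exactly 1/4 of the claims in the extended set come out true. Again this is just my own illustration; the helper `extend_to_calibrate` is hypothetical.

```python
def extend_to_calibrate(num, den, initial_truths, max_multiple=1000):
    """Try to pad `initial_truths` with further claims (truth values chosen
    freely) so that exactly num/den of the claims in the extended set are
    true. Returns the extended list, or None if no extension is found."""
    for k in range(1, max_multiple + 1):
        total = k * den                  # size of the candidate extended set
        trues_needed = k * num           # how many truths calibration demands
        have_true = sum(initial_truths)
        have_false = len(initial_truths) - have_true
        if (total >= len(initial_truths)
                and trues_needed >= have_true
                and total - trues_needed >= have_false):
            return (initial_truths
                    + [True] * (trues_needed - have_true)
                    + [False] * (total - trues_needed - have_false))
    return None

# Credence 1/4 in D_{c_24}, which happens to be true; one calibrated extension:
extended = extend_to_calibrate(1, 4, [True])
print(extended)                       # [True, False, False, False]
print(sum(extended) / len(extended))  # 0.25 - matches the credence exactly
```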

Van Fraassen's key claim is that if an agent's credence function is not calibratable (over some set over which it is defined), then the agent is irrational. Or - more accurately, an agent is irrational if (s)he has a credal state such that it can be determined a priori that if the agent has that credal state, then it is not calibratable (over some set over which it is defined). Van Fraassen shows that it follows from this principle that a rational agent will obey the probability axioms. This is a welcome result, because the probability axioms are intuitively compelling - and here we have an argument to the conclusion that rational agents' credence functions obey these axioms. What Caie argues in this paper, though, is that unwelcome results also follow from van Fraassen's principle.

The centrepiece of Caie's paper concerns this sentence, where $Cr_a$ refers to Annie's credence function:

(*)        $\neg (Cr_a(T(*)) \geq 0.5)$

The sentence (T) below is an instance of the T-schema, and plausibly Annie (if rational) has a credence of 1 in (T).

(T)        $T(*) \leftrightarrow \neg (Cr_a(T(*)) \geq 0.5)$

Caie shows that if Annie has a credence of 1 in (T), and any credence at all in (*), then her credal state is not calibratable. Thus, given van Fraassen's principle, Annie is classed as irrational. But this is an unwelcome result. As Caie rightly points out, (*) is not a liar sentence: it can be, say, true without contradiction. Furthermore, Annie can have a credence of 1 in (T) and some credence in (*) without violating the probability axioms. Thus (Caie argues) van Fraassen's principle gives the wrong result here. (In fact, Caie acknowledges that it may seem that Annie's credal state should be classed as irrational if she has some credence in (*). He goes on to show that - given van Fraassen's principle - her credal state is classed as irrational if she has any credence greater than 0 in (T), and this is much harder to swallow.)

We can see why, if Annie has any credence in (*), her credal state is not calibratable over the set containing (*) and (T). To see this, let's try to extend the set, and find a set over which Annie's credal state would be calibrated. In extending the set, we can introduce as many sentences $x$ as we like that are relevantly similar to (*); but to count as 'relevantly similar to (*)', these sentences $x$ must be such that $T(x) \leftrightarrow \neg (Cr_a(T(x)) \geq 0.5)$ holds. Let's start, then, by supposing that Annie's credence in (*) is less than 0.5. Then (*) itself is true, so to get her credal state to calibrate we need to include in the extended set some false sentences $x$ that are relevantly similar to (*). But when we include a sentence $x$ that is relevantly similar to (*), we must suppose that Annie's credence in $x$ is the same as her credence in (*) - that is, less than 0.5. And whenever we have a sentence $x$ relevantly similar to (*) such that Annie's credence in $x$ is less than 0.5, that sentence will be true: this is because $T(x) \leftrightarrow \neg (Cr_a(T(x)) \geq 0.5)$ holds for all these $x$'s that are relevantly similar to (*). Thus if Annie's credence in (*) is less than 0.5, we cannot extend the set containing (*) and (T) to produce a set over which Annie's credal state would be calibrated. Parallel reasoning shows that we can't produce such an extended set if Annie's credence in (*) is greater than or equal to 0.5 (for then every relevantly similar sentence will be false, while her credence in each is at least 0.5). So whatever Annie's credence in (*), her credal state can't be calibrated over the set. Thus on van Fraassen's account, she is irrational. As Caie argues, this is an unwelcome result.
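
The dilemma can be made vivid with a small computation (again my own sketch, not code from the paper; both function names are my inventions). Since every sentence $x$ relevantly similar to (*) must get the same credence as (*), and the biconditional forces each such $x$ to be true exactly when that credence is below 0.5, the credence itself fixes the frequency of truths in any extension - and the gap between credence and frequency never falls below 0.5.

```python
# Every sentence x relevantly similar to (*) satisfies
#   T(x) <-> not (Cr_a(T(x)) >= 0.5),
# and must receive the same credence as (*). So the credence fixes the truth
# value of every claim in any extension, and hence fixes the frequency.

def forced_truth_value(credence):
    """The biconditional makes x true exactly when the credence is below 0.5."""
    return not (credence >= 0.5)

def frequency_in_any_extension(credence):
    """All claims in the extension share one forced truth value, so the
    frequency of truths is 1 (credence < 0.5) or 0 (credence >= 0.5)."""
    return 1.0 if forced_truth_value(credence) else 0.0

for p in [0.0, 0.25, 0.49, 0.5, 0.75, 1.0]:
    freq = frequency_in_any_extension(p)
    print(f"credence {p}: frequency = {freq}, gap = {abs(p - freq)}")
# The gap is never below 0.5, so no extension calibrates any credence in (*).
```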

This argument from Caie reminds me of Sorensen's discussion of epistemic blindspots. You can have a claim that is perfectly consistent, but such that if you conjoin it with the further claim that it is believed (perhaps by a particular person or at a particular time - depending on the claim), the conjunction is inconsistent. Here is a simple example:

(i)    S does not have any beliefs.

This sentence is perfectly consistent, but conjoin (i) with the claim that S believes (i) and you get an inconsistent conjunction. Thus S cannot (as a matter of logical necessity) truly believe (i). The same goes for this Moorean sentence:

(ii)    P and S does not believe that P.

(ii) is consistent, but the conjunction of (ii) and the claim that S believes (ii) is inconsistent. Thus S cannot truly believe (ii). There are even sentences such that S can neither truly believe the sentence nor truly believe its negation. Here is an example:

(iii)    S does not believe (iii).

The conjunction of (iii) together with the claim that S believes (iii) is inconsistent, so S cannot truly believe (iii). But it seems that S (if coherent) also cannot truly believe the negation of (iii). For if S believes the negation of (iii), then (if S is coherent) S does not believe (iii) - in which case (iii) is true, and the negation of (iii) is false. It seems, then, that S (if coherent) can truly believe neither (iii) nor its negation. Further, this can be figured out a priori. So doesn't it follow that S is incoherent if (s)he believes either (iii) or its negation?
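
This can be checked mechanically. The enumeration below (my own sketch; the variable names are my inventions) writes `b` for 'S believes (iii)' and `nb` for 'S believes the negation of (iii)', treats coherence as ruling out holding both beliefs at once, and reads the truth value of (iii) off as `not b`.

```python
from itertools import product

for b, nb in product([False, True], repeat=2):
    if b and nb:
        continue                 # incoherent: believing (iii) and its negation
    iii_is_true = not b          # (iii) says exactly that S does not believe it
    truly_believes_iii = b and iii_is_true
    truly_believes_neg = nb and not iii_is_true
    print(f"believes (iii)={b}, believes negation={nb}: "
          f"truly believes (iii)? {truly_believes_iii}, "
          f"truly believes negation? {truly_believes_neg}")
# Both answers are False in every coherent case: S can truly believe
# neither (iii) nor its negation, and this can be verified a priori.
```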

I don't think this does follow, however. I think that whether an agent's outright belief set counts as irrational should depend simply on whether the contents of his or her beliefs are consistent - and both (iii) and its negation are consistent. We might be tempted to judge whether S's outright belief state is rational by asking whether the set of beliefs that S holds is such that S could hold all of these beliefs truly. But this introduces the quirks that we have seen: S can have a perfectly consistent set of beliefs, but because some of these are about her own belief state, we end up classing S as irrational. It seems better to me to judge whether S's outright belief state is rational by asking whether the set of beliefs that S holds is such that someone could hold all of those beliefs truly. But even this can lead us astray - for we will still have quirky cases involving beliefs to the effect that someone has a particular belief. It is better simply to ask whether the contents of the beliefs form a consistent set.

The same issue seems to arise in Caie's example. If Annie has a credence of 1 in (T), and any credence in (*), then her credal state cannot be calibrated. That is, there is no way of extending Annie's credal state in such a way that it can be both Annie's credal state and calibrated. This is why (as Caie shows) on van Fraassen's account, Annie gets classed as irrational. However, there are ways of extending Annie's credal state in such a way that it can be calibrated: it just can't be both calibrated and Annie's credal state. Here, then, I think we have a new sort of epistemic blindspot: if we accept van Fraassen's account, then it seems that a rational credal state can be defined over both (T) and (*) - but not if it is Annie's.

One option, then, is to adjust van Fraassen's view to get around this problem. We could require a rational agent's credal state to be such that it can be extended into a calibrated credal state - but not necessarily into a credal state that would be calibrated if it were the agent's credal state. Caie has something to say about why this move is a mistake - but I think it might be worth pursuing.

Comments

  1. Hi Anna, thanks for the blog post! Here are a few thoughts in response.

    As you note, in the paper I argue that there are cases in which, for some agent $S$, some algebra $\mathcal{A}$, and some $p > 0$, given that $S$ has a credal state defined over $\mathcal{A}$, it is impossible for $S$ to be calibrated to within $p$.
    If, then, one endorses the principle (which van Fraassen seems to endorse) that says that it is irrational for an agent to have a credal state if it is not possible for the agent to have that credal state and be calibratable to within $\epsilon$ (for any value $\epsilon > 0$), then it follows that if $S$ has a credal state defined over $\mathcal{A}$ then she is doomed to irrationality.
    This, however, conflicts with a plausible ought-implies-can principle.
    And so, I argue, we should reject the claim that it is irrational for an agent to have a credal state if it is impossible for her to have that credal state and be appropriately calibrated.
    Suppose, however, that credal states, in some sense, constitutively aim at being close to relative frequencies.
    Is there some alternative normative principle that one could endorse that is compatible with ought-implies-can?
    In the paper, I argue that the best thing for a frequentist to say (roughly) is that in those cases in which calibration is precluded for an agent, the agent ought to have a credal state that comes as close as possible to matching the limiting relative frequencies.
    Interestingly, though, in certain cases the credal state with this property will be probabilistically incoherent.

    Another response, however, to the initial problem, which is suggested at the end of your post, is to link the rationality of a credal function $C(\cdot)$ for some agent $S$ not to whether or not it is possible for the agent to have that credal state and be calibrated, but just to whether that credal state itself, considered independently, can be such that there are limiting relative frequencies with which it lines up.
    If one says this, then it follows that one ought to have probabilistically coherent credences.

    I think that this is an interesting option, and is, indeed, the best response if one wants to try to salvage the calibration argument for Probabilism.
    My worry about this option is that it's hard for me to see why we should care about whether or not a credal state itself is calibratable, unless we take calibratability to be a *goal*, something that we ought to strive for.
    But if we think that calibratability is a goal, then what seems to be really normatively relevant in assessing the rationality of a possible credal state $C(\cdot)$ for some agent $S$, is how calibrated $C(\cdot)$ could be were it to be $S$'s credal state, not how calibratable $C(\cdot)$ is in principle.
    Obviously, this is far from decisive.
    But there is, I think, a challenge here to say why calibratability in principle is relevant for assessing the rationality of a credal state without appealing to the idea of calibratability being something that our credal states should aim for.
    I, at least, am not certain what that story should look like.

    As you note, the cases I consider are similar in certain respects to Moore paradoxical cases.
    Such propositions are consistent; however, for certain agents, they cannot be truly believed.
    I'm inclined to think that the Moore paradoxical cases in fact support the idea that what's relevant for assessing the rationality of some doxastic state for an agent $S$ is not whether or not that state may in principle have some particular feature, but whether it can have that feature given that the agent has the doxastic state in question.
    After all, it does seem to be irrational for an agent to believe a proposition that is Moore paradoxical for them.
    But that isn't because the proposition can't be true, or because it is impossible for a belief in the relevant proposition to be true.
    It would seem instead to be irrational because that particular agent cannot truly believe the proposition in question.

    CONTINUED BELOW...

  2. It's worth noting that similar issues arise for Joyce's Accuracy-Dominance argument and the Dutch-Book argument.
    In both cases, it is quite tempting to appeal to some putative doxastic goal, or at least, good-making feature.
    In the former case, it is tempting to appeal to the goal of representing the world accurately, in the latter it is tempting to appeal to the good-making feature of having credences which preclude monetary loss.
    In both cases, however, one can show, for reasons that parallel those outlined in "Calibration and Probabilism", that pursuit of these goals/good-making features is sometimes best served by being probabilistically incoherent.
    (I consider these cases in a paper called "Rational Probabilistic Incoherence" in Phil Review 2013.)
    And in both cases there is an available move which parallels the move you suggest at the end of your post.
    Instead of looking at how accurate a credal state would be were it to be yours, or what monetary exploitation is possible given that a credal state is yours, we simply consider the credal states in the abstract.
    And once we abstract away from limitations imposed given that the credal state is held by some particular individual, the arguments for Probabilism can proceed.
    But, again, it seems to me that there is a challenge here to say why we should care about credal states being accurate or not leading to monetary exploitation unless accurate representation/non-exploitation is a doxastic goal or good-making feature.
    And, again, I'm not sure what a plausible story here would look like.

    The general issue that you raise, then, does seem to me to be important.
    Whether a number of the best known attempts to justify Probabilism are in good standing turns, in part, on whether the type of move that you suggest is ultimately viable.

    Finally, I should note that there are two papers at this year's Formal Epistemology Workshop ("Coherence or Accuracy" by Jennifer Carr, and "The Foundations of Epistemic Decision Theory" by Ben Levinstein and Jason Konek) that explore issues related to this question.
    And recent papers that deal with related issues are "Epistemic Decision Theory" by Hilary Greaves in Mind, and "Epistemic Teleology and the Separateness of Propositions" by Selim Berker in Phil Review.
