The Accuracy Dominance Argument for Conditionalization without the Additivity Assumption


Last week, I explained how you can give an accuracy dominance argument for Probabilism without assuming that your inaccuracy measures are additive -- that is, without assuming that the inaccuracy of a whole credence function is obtained by adding up the inaccuracies of the individual credences it assigns. The mathematical result behind that argument also allows us to give my chance dominance argument for the Principal Principle without assuming additivity, and likewise for my accuracy-based argument for linear pooling. In this post, I turn to another Bayesian norm, namely Conditionalization. The first accuracy argument for this norm was given by Hilary Greaves and David Wallace, building on ideas developed by Graham Oddie. It was an expected accuracy argument, and it didn't assume additivity. More recently, Ray Briggs and I offered an accuracy dominance argument for the norm, and we did assume additivity. It's this latter argument I'd like to consider here. I'd like to show that it goes through even without assuming additivity. And indeed I'd like to generalise it at the same time. The generalisation is inspired by a recent paper by Michael Rescorla. In it, Rescorla notes that all the existing arguments for Conditionalization assume that, when your evidence comes in the form of a proposition learned with certainty, that proposition must be true. He then offers a Dutch Book argument for Conditionalization that doesn't make this assumption, and he issues a challenge to provide other sorts of arguments that do the same. Here, I take up that challenge. To do so, I will offer an argument for what I call the Weak Reflection Principle.

Weak Reflection Principle (WRP) Your current credence function should be a convex combination of the possible future credence functions that you endorse.

A lot might happen between now and tomorrow. I might see new sights, think new thoughts; I might forget things I know today, or take mind-altering drugs that enhance or impair my thinking; and so on. So perhaps there is a set of credence functions I think I might have tomorrow. Some of those I'll endorse -- perhaps those I'd get if I saw certain new things, or enhanced my cognition in various ways. And some of them I'll disavow -- perhaps those I'd get if I forgot certain things, or impaired my cognition. WRP asks you to separate the wheat from the chaff: once you've identified the future credence functions you endorse, it tells you that your current credence function should lie in their convex hull; that is, it should be a weighted average or convex combination of them.
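To make this vivid, here is a toy example; the numbers are mine, chosen only for illustration. Suppose there are just two worlds, $w_1$ and $w_2$, and suppose the future credence functions I endorse are $c^1$, with $c^1_1 = 0.9$ and $c^1_2 = 0.1$, and $c^2$, with $c^2_1 = 0.3$ and $c^2_2 = 0.7$. Then WRP demands that$$c^0 = \lambda c^1 + (1 - \lambda) c^2 \quad \text{for some } 0 \leq \lambda \leq 1$$and so my current credence in $w_1$ must lie in the interval $[0.3, 0.9]$. A current credence of $0.95$ in $w_1$, for instance, would violate WRP.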

One nice thing about WRP is that it gives back Conditionalization in certain cases. Suppose $c^0$ is my current credence function. Suppose I know that between now and tomorrow I'll learn exactly one member of the partition $E_1, \ldots, E_m$ with certainty --- this is the situation that Greaves and Wallace envisage. And suppose I endorse credence function $c^1$ as a response to learning $E_1$, $c^2$ as a response to learning $E_2$, and so on. Then, if I satisfy WRP, and if $c^k(E_k) = 1$ for each $k$ -- since I do, after all, learn $E_k$ with certainty -- it follows that, whenever $c^0(E_k) > 0$, $c^k(X) = c^0(X \mid E_k)$, which is exactly what Conditionalization asks of you. And notice that at no point did we assume that if I learn $E_k$, then $E_k$ is true. So we've answered Rescorla's challenge if we can establish WRP.
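To make that last step explicit, here is the short calculation; it uses nothing beyond the assumptions already in play. If I satisfy WRP, then$$c^0 = \sum^m_{l=1} \lambda_l c^l, \quad \text{with } \lambda_l \geq 0 \text{ and } \sum^m_{l=1} \lambda_l = 1$$Since the $E_k$ form a partition and $c^l(E_l) = 1$, each $c^l$ assigns $c^l(E_k) = 0$ for $k \neq l$. So$$c^0(E_k) = \sum^m_{l=1} \lambda_l c^l(E_k) = \lambda_k$$and, for any proposition $X$,$$c^0(X\ \&\ E_k) = \sum^m_{l=1} \lambda_l c^l(X\ \&\ E_k) = \lambda_k c^k(X\ \&\ E_k) = \lambda_k c^k(X)$$where the last identity holds because $c^k(E_k) = 1$. So, whenever $c^0(E_k) = \lambda_k > 0$,$$c^k(X) = \frac{c^0(X\ \&\ E_k)}{c^0(E_k)} = c^0(X \mid E_k)$$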

To do that, we need Theorem 1 below. And to get there, we need to go via Lemmas 1 and 2. Just to remind ourselves of the framework:

  • $w_1, \ldots, w_n$ are the possible worlds;
  • credence functions are defined on the full algebra built on top of these possible worlds;
  • given a credence function $c$, we write $c_i$ for the credence that $c$ assigns to $w_i$;
  • we write $w^i$ for the omniscient credence function at $w_i$, which assigns credence 1 to every proposition true at $w_i$ and credence 0 to every proposition false there; in particular, $w^i_j = 1$ if $j = i$ and $w^i_j = 0$ otherwise.

Lemma 1 If $c^0$ is not in the convex hull of $c^1, \ldots, c^m$, then $(c^0, c^1, \ldots, c^m)$ is not in the convex hull of $\mathcal{X}$, where$$\mathcal{X} := \{(w^i, c^1, \ldots, c^{k-1}, w^i, c^{k+1}, \ldots, c^m) : 1 \leq i \leq n\ \&\ 1 \leq k \leq m\}$$

Definition 1 Suppose $\mathfrak{I}$ is a continuous strictly proper inaccuracy measure. Then let$$\mathfrak{D}_\mathfrak{I}((p^0, p^1, \ldots, p^m), (c^0, c^1, \ldots, c^m)) = \sum^m_{k=0} \left ( \sum^n_{i=1} p^k_i \mathfrak{I}(c^k, i) - \sum^n_{i=1} p^k_i \mathfrak{I}(p^k, i) \right )$$In the one-component case, we write $\mathfrak{D}_\mathfrak{I}(p, c) = \sum^n_{i=1} p_i \mathfrak{I}(c, i) - \sum^n_{i=1} p_i \mathfrak{I}(p, i)$, so that the divergence between tuples is just the sum, component by component, of the one-component divergences.
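For a concrete illustration -- my choice of example, not one the argument depends on -- take the Brier score, restricted for simplicity to the credences assigned to the worlds themselves: $\mathfrak{I}(c, i) = \sum^n_{j=1} (w^i_j - c_j)^2$. A standard calculation shows that the one-component divergence is then squared Euclidean distance,$$\mathfrak{D}_\mathfrak{I}(p, c) = \sum^n_{j=1} (p_j - c_j)^2$$and so the divergence between tuples is squared Euclidean distance between the concatenated vectors:$$\mathfrak{D}_\mathfrak{I}((p^0, \ldots, p^m), (c^0, \ldots, c^m)) = \sum^m_{k=0} \sum^n_{j=1} (p^k_j - c^k_j)^2$$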

Lemma 2 Suppose $\mathfrak{I}$ is a continuous strictly proper inaccuracy measure. Suppose $\mathcal{X}$ is a closed convex set of $(m+1)$-tuples of probabilistic credence functions. And suppose $(c^0, c^1, \ldots, c^m)$ is not in $\mathcal{X}$. Then there is $(q^0, q^1, \ldots, q^m)$ in $\mathcal{X}$ such that

(i) for all $(p^0, p^1, \ldots, p^m) \neq (q^0, q^1, \ldots, q^m)$ in $\mathcal{X}$,$$\mathfrak{D}_\mathfrak{I}((q^0, q^1, \ldots, q^m), (c^0, c^1, \ldots, c^m)) < \mathfrak{D}_\mathfrak{I}((p^0, p^1, \ldots, p^m), (c^0, c^1, \ldots, c^m));$$

(ii) for all $(p^0, p^1, \ldots, p^m)$ in $\mathcal{X}$,$$\mathfrak{D}_\mathfrak{I}((p^0, p^1, \ldots, p^m), (c^0, c^1, \ldots, c^m)) \geq \mathfrak{D}_\mathfrak{I}((p^0, p^1, \ldots, p^m), (q^0, q^1, \ldots, q^m)) + \mathfrak{D}_\mathfrak{I}((q^0, q^1, \ldots, q^m), (c^0, c^1, \ldots, c^m)).$$

Theorem 1 Suppose each of $c^0, c^1, \ldots, c^m$ is a probabilistic credence function. If $c^0$ is not in the convex hull of $c^1, \ldots, c^m$, then there are probabilistic credence functions $q^0, q^1, \ldots, q^m$ such that, for all worlds $w_i$ and all $1 \leq k \leq m$,$$\mathfrak{I}(q^0, i) + \mathfrak{I}(q^k, i) < \mathfrak{I}(c^0, i) + \mathfrak{I}(c^k, i)$$

Let's keep the proofs on ice for a moment. What exactly does this show? It says that, if you don't do as WRP demands, then there are alternatives: an alternative to your current credence function, and an alternative to each of the possible future credence functions you endorse. And these alternatives are such that having your actual current credence function now, followed by any one of your endorsed future credence functions later, is guaranteed to be less accurate overall than having the alternative current credence function now, followed by the corresponding alternative future credence function later. This, I claim, establishes WRP.
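As a sanity check, here is a short numerical sketch of Theorem 1; it is my own illustration, not part of the original argument. It uses the Brier score, for which the divergence $\mathfrak{D}_\mathfrak{I}$ is squared Euclidean distance (see the example after Definition 1), finds the $(q^0, \ldots, q^m)$ of Lemma 2 by minimising that divergence over the convex hull of $\mathcal{X}$ with an off-the-shelf solver, and checks the dominance inequality at every world. The particular credence functions, and helper names like `hull_point` and `divergence_to_c`, are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

n = 3                                  # number of worlds
c0 = np.array([0.1, 0.1, 0.8])         # current credences over worlds
cs = [np.array([0.8, 0.1, 0.1]),       # endorsed future credence functions;
      np.array([0.1, 0.8, 0.1])]       # c0 is not a mixture of these two
m = len(cs)

def brier(c, i):
    """Brier inaccuracy of credence function c (over worlds) at world i."""
    w = np.zeros(n)
    w[i] = 1.0
    return float(np.sum((w - c) ** 2))

def hull_point(lam):
    """Point of the convex hull of X given by weights lam[i, k]: slot 0 is
    sum_{i,k} lam[i,k] w^i; slot l mixes the w^i (weights lam[i, l]) with
    c^l (the remaining weight)."""
    lam = lam.reshape(n, m)
    slot0 = lam.sum(axis=1)            # coordinates of sum_{i,k} lam[i,k] w^i
    slots = [lam[:, l] + (1.0 - lam[:, l].sum()) * cs[l] for l in range(m)]
    return [slot0] + slots

def divergence_to_c(lam):
    """Brier divergence D_I from the hull point given by lam to the tuple
    (c0, c1, ..., cm): squared Euclidean distance, summed slot by slot."""
    return sum(np.sum((p - c) ** 2)
               for p, c in zip(hull_point(lam), [c0] + cs))

# Find the (q^0, ..., q^m) of Lemma 2: the divergence minimiser over the hull.
res = minimize(divergence_to_c, np.full(n * m, 1.0 / (n * m)),
               method="SLSQP", bounds=[(0.0, 1.0)] * (n * m),
               constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}])
qs = hull_point(res.x)

# Theorem 1's dominance claim: for every world i and every k,
#   I(q^0, i) + I(q^k, i) < I(c^0, i) + I(c^k, i).
for i in range(n):
    for k in range(1, m + 1):
        lhs = brier(qs[0], i) + brier(qs[k], i)
        rhs = brier(c0, i) + brier(cs[k - 1], i)
        print(f"world {i + 1}, k = {k}: {lhs:.4f} < {rhs:.4f} ? {lhs < rhs}")
```

With these inputs the dominance check should come out true in every case, as the theorem predicts for the exact minimiser (up to solver tolerance).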

Now for the proofs.

Proof of Lemma 1. We prove the contrapositive. Suppose $(c^0, c^1, \ldots, c^m)$ is in the convex hull of $\mathcal{X}$. Then there are $0 \leq \lambda_{i, k} \leq 1$ such that $\sum^n_{i=1}\sum^m_{k=1} \lambda_{i, k} = 1$ and$$(c^0, c^1, \ldots, c^m) = \sum^n_{i=1} \sum^m_{k=1} \lambda_{i, k} (w^i, c^1, \ldots, c^{k-1}, w^i, c^{k+1}, \ldots, c^m)$$Reading off the $0$th component,$$c^0 = \sum^n_{i=1}\sum^m_{k=1} \lambda_{i,k} w^i$$and, reading off the $k$th component,$$c^k = \sum^n_{i=1} \lambda_{i, k} w^i + \sum^n_{i=1} \sum_{l \neq k} \lambda_{i, l} c^k$$Since $\sum^n_{i=1}\sum^m_{l=1} \lambda_{i, l} = 1$, this gives$$\left(\sum^n_{i=1} \lambda_{i, k}\right) c^k = \sum^n_{i=1} \lambda_{i, k} w^i$$So let $\lambda_k = \sum^n_{i=1} \lambda_{i, k}$. Then, for $1 \leq k \leq m$,$$\lambda_k c^k = \sum^n_{i=1} \lambda_{i, k} w^i$$And thus$$\sum^m_{k=1} \lambda_k c^k = \sum^m_{k=1} \sum^n_{i=1} \lambda_{i, k} w^i = c^0$$Since the $\lambda_k$ are non-negative and $\sum^m_{k=1} \lambda_k = 1$, this exhibits $c^0$ as a convex combination of $c^1, \ldots, c^m$, as required. $\Box$

Proof of Lemma 2. This proceeds exactly like the corresponding result in the previous blog post. $\Box$

Proof of Theorem 1. (Throughout, I write $\mathfrak{D}$ for $\mathfrak{D}_\mathfrak{I}$.) Suppose $c^0$ is not in the convex hull of $c^1, \ldots, c^m$. Then, by Lemma 1, $(c^0, c^1, \ldots, c^m)$ is not in the convex hull of $\mathcal{X}$, which is a closed convex set of $(m+1)$-tuples of probabilistic credence functions. So, by Lemma 2, there is $(q^0, q^1, \ldots, q^m)$ in the convex hull of $\mathcal{X}$ such that, for all $(p^0, p^1, \ldots, p^m)$ in the convex hull of $\mathcal{X}$,$$\mathfrak{D}((p^0, p^1, \ldots, p^m), (q^0, q^1, \ldots, q^m)) < \mathfrak{D}((p^0, p^1, \ldots, p^m), (c^0, c^1, \ldots, c^m))$$To see this, note that Lemma 2(ii) rearranges to$$\mathfrak{D}((p^0, \ldots, p^m), (q^0, \ldots, q^m)) \leq \mathfrak{D}((p^0, \ldots, p^m), (c^0, \ldots, c^m)) - \mathfrak{D}((q^0, \ldots, q^m), (c^0, \ldots, c^m))$$and $\mathfrak{D}((q^0, \ldots, q^m), (c^0, \ldots, c^m)) > 0$, since $(q^0, \ldots, q^m) \neq (c^0, \ldots, c^m)$ and $\mathfrak{I}$ is strictly proper. In particular, for any world $w_i$ and any $1 \leq k \leq m$,$$\mathfrak{D}((w^i, c^1, \ldots, c^{k-1}, w^i, c^{k+1}, \ldots, c^m), (q^0, q^1, \ldots, q^m)) < \mathfrak{D}((w^i, c^1, \ldots, c^{k-1}, w^i, c^{k+1}, \ldots, c^m), (c^0, c^1, \ldots, c^m))$$

But, for any probabilistic credence function $p$, the one-component divergence satisfies $\mathfrak{D}(w^i, p) = \mathfrak{I}(p, i) - \mathfrak{I}(w^i, i)$. So$$\begin{eqnarray*}
& & \mathfrak{I}(q^0, i) + \mathfrak{I}(q^k, i) - 2\,\mathfrak{I}(w^i, i) \\
& = & \mathfrak{D}(w^i, q^0) + \mathfrak{D}(w^i, q^k) \\
& \leq & \mathfrak{D}((w^i, c^1, \ldots, c^{k-1}, w^i, c^{k+1}, \ldots, c^m), (q^0, q^1, \ldots, q^m)) \\
& < & \mathfrak{D}((w^i, c^1, \ldots, c^{k-1}, w^i, c^{k+1}, \ldots, c^m), (c^0, c^1, \ldots, c^m)) \\
& = & \mathfrak{D}(w^i, c^0) + \mathfrak{D}(w^i, c^k) \\
& = & \mathfrak{I}(c^0, i) + \mathfrak{I}(c^k, i) - 2\,\mathfrak{I}(w^i, i)
\end{eqnarray*}$$The first inequality holds because the divergence between tuples is the sum of the one-component divergences and $\mathfrak{D}(c^l, q^l) \geq 0$ for each $l \neq k$; the penultimate equality holds because $\mathfrak{D}(c^l, c^l) = 0$. Adding $2\,\mathfrak{I}(w^i, i)$ to the first and last lines gives$$\mathfrak{I}(q^0, i) + \mathfrak{I}(q^k, i) < \mathfrak{I}(c^0, i) + \mathfrak{I}(c^k, i)$$as required. $\Box$

 
