I noted that there are two natural ways to aggregate $c_A$ and $c_B$ for someone who adheres to Probabilism, the principle that says that credences should be coherent. You might first fix up Adila's and Benoit's credences so that they are coherent, and then aggregate them using linear pooling -- let's call that *fix-then-pool*. Or you might aggregate Adila's and Benoit's credences using linear pooling, and then fix up the pooled credences so that they are coherent -- let's call that *pool-then-fix*. And I noted that, for some natural ways of fixing up incoherent credences, fix-then-pool gives a different result from pool-then-fix. This, I claimed, creates a dilemma for the person doing the aggregating, since there seems to be no principled reason to favour either method.

How do we fix up incoherent credences? Well, a natural idea is to find the coherent credences that are closest to them and adopt those in their place. This obviously requires a measure of distance between two credence functions. In last week's post, I considered two:

**Squared Euclidean Distance (SED)** For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$SED(c, c') = \sum^n_{i=1} (c(X_i) - c'(X_i))^2$$

**Generalized Kullback-Leibler Divergence (GKL)** For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$GKL(c, c') = \sum^n_{i=1} c(X_i) \log\frac{c(X_i)}{c'(X_i)} - \sum^n_{i=1} c(X_i) + \sum^n_{i=1} c'(X_i)$$
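Both definitions are easy to compute directly. Here's a minimal Python sketch (the function names `sed` and `gkl` are mine, not standard; credence functions are represented as lists of credences over the same propositions):

```python
import math

def sed(c, cp):
    """Squared Euclidean distance between two credence functions."""
    return sum((x - y) ** 2 for x, y in zip(c, cp))

def gkl(c, cp):
    """Generalized Kullback-Leibler divergence from c to cp.
    Assumes all credences are strictly positive."""
    return sum(x * math.log(x / y) - x + y for x, y in zip(c, cp))

# Example: an incoherent credence function on {X, not-X},
# compared with a coherent one.
c_incoherent = [0.6, 0.6]
c_coherent = [0.5, 0.5]
print(sed(c_incoherent, c_coherent))  # 0.02
print(gkl(c_incoherent, c_coherent))
```

Both quantities are zero exactly when the two credence functions agree, and positive otherwise.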

If we use $SED$ when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $SED(c^*, c)$ is minimal -- then fix-then-pool gives *the same results* as pool-then-fix.

If we use $GKL$ when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $GKL(c^*, c)$ is minimal -- then fix-then-pool gives *different results* from pool-then-fix.
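On a two-cell partition $\{X, \overline{X}\}$, both fixes have simple closed forms, which can be read off the formulas later in the post: the SED fix is the orthogonal projection onto the plane $c(X) + c(\overline{X}) = 1$, and the GKL fix is normalization. Here's a sketch illustrating both claims (the function names and sample credences are mine; the incoherent inputs are chosen arbitrarily):

```python
def fix_sed(c):
    """Closest coherent credences under SED on {X, not-X}:
    orthogonal projection onto the plane p + q = 1."""
    shift = (1 - c[0] - c[1]) / 2
    return [c[0] + shift, c[1] + shift]

def fix_gkl(c):
    """Closest coherent credences under GKL on {X, not-X}:
    normalization (assumes strictly positive credences)."""
    total = c[0] + c[1]
    return [c[0] / total, c[1] / total]

def pool(c1, c2, alpha):
    """Weighted linear pool of two credence functions."""
    return [alpha * x + (1 - alpha) * y for x, y in zip(c1, c2)]

c_A, c_B, alpha = [0.6, 0.6], [0.3, 0.5], 0.4  # both incoherent

for fix in (fix_sed, fix_gkl):
    fix_then_pool = pool(fix(c_A), fix(c_B), alpha)
    pool_then_fix = fix(pool(c_A, c_B, alpha))
    print(fix.__name__, fix_then_pool, pool_then_fix)
```

Running this, the two orders agree under `fix_sed` but come apart under `fix_gkl`.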

Since last week's post, I've been reading this paper by Joel Predd, Daniel Osherson, Sanjeev Kulkarni, and Vincent Poor. They suggest that we pool and fix incoherent credences in one go using a method called the Coherent Aggregation Principle (CAP), formulated in this paper by Daniel Osherson and Moshe Vardi. In its original version, CAP says that we should aggregate Adila's and Benoit's credences by taking the coherent credence function $c$ such that the sum of the distance of $c$ from $c_A$ and the distance of $c$ from $c_B$ is minimized. That is,

**CAP** Given a measure of distance $D$ between credence functions, we should pick the coherent credence function $c$ that minimizes $D(c, c_A) + D(c, c_B)$.

As they note, if we take $SED$ to be our measure of distance, then this method generalizes the aggregation procedure on coherent credences that just takes straight averages of credences. That is, CAP entails unweighted linear pooling:

**Unweighted Linear Pooling** If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\frac{1}{2} c_A + \frac{1}{2}c_B$$
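This entailment is easy to check numerically on $\{X, \overline{X}\}$: minimizing the summed SED over coherent candidates recovers the straight average. Here's a sketch using a fine grid search in place of calculus (the function name and sample credences are mine):

```python
def cap_sed(c_A, c_B, steps=10000):
    """CAP with SED on {X, not-X}: grid-search for the coherent
    credence function minimizing SED(c, c_A) + SED(c, c_B)."""
    def sed(c, cp):
        return sum((x - y) ** 2 for x, y in zip(c, cp))
    return min(
        ([p / steps, 1 - p / steps] for p in range(steps + 1)),
        key=lambda c: sed(c, c_A) + sed(c, c_B),
    )

c_A, c_B = [0.7, 0.3], [0.4, 0.6]  # both coherent
print(cap_sed(c_A, c_B))  # ≈ [0.55, 0.45], the straight average
```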

We can generalize this result a little by taking a weighted sum of the distances, rather than the straight sum.

**Weighted CAP** Given a measure of distance $D$ between credence functions, and given $0 \leq \alpha \leq 1$, we should pick the coherent credence function $c$ that minimizes $\alpha D(c, c_A) + (1-\alpha)D(c, c_B)$.

If we take $SED$ to measure the distance between credence functions, then this method generalizes linear pooling. That is, Weighted CAP entails linear pooling:

**Linear Pooling** If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\alpha c_A + (1-\alpha)c_B$$ for some $0 \leq \alpha \leq 1$.

What's more, when distance is measured by $SED$, Weighted CAP agrees with fix-then-pool and with pool-then-fix (providing the fixing is done using $SED$ as well). Thus, when we use $SED$, all of the methods for aggregating incoherent credences that we've considered agree. In particular, they all recommend the following credence in $X$: $$\frac{1}{2} + \frac{\alpha(c_A(X)-c_A(\overline{X})) + (1-\alpha)(c_B(X) - c_B(\overline{X}))}{2}$$

However, the story is not nearly so neat and tidy if we measure the distance between two credence functions using $GKL$. Here's the credence in $X$ recommended by fix-then-pool:$$\alpha \frac{c_A(X)}{c_A(X) + c_A(\overline{X})} + (1-\alpha)\frac{c_B(X)}{c_B(X) + c_B(\overline{X})}$$ Here's the credence in $X$ recommended by pool-then-fix: $$\frac{\alpha c_A(X) + (1-\alpha)c_B(X)}{\alpha (c_A(X) + c_A(\overline{X})) + (1-\alpha)(c_B(X) + c_B(\overline{X}))}$$ And here's the credence in $X$ recommended by Weighted CAP: $$\frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$$ For many values of $\alpha$, $c_A(X)$, $c_A(\overline{X})$, $c_B(X)$, and $c_B(\overline{X})$, these will give three distinct results.
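To see the three-way divergence concretely, here's a sketch evaluating the three closed-form recommendations above at sample values (the function names and the incoherent sample credences are mine):

```python
def ftp(cA, cB, alpha):
    """Fix-then-pool under GKL: pool the normalized credences."""
    return alpha * cA[0] / sum(cA) + (1 - alpha) * cB[0] / sum(cB)

def ptf(cA, cB, alpha):
    """Pool-then-fix under GKL: normalize the pooled credences."""
    num = alpha * cA[0] + (1 - alpha) * cB[0]
    den = alpha * sum(cA) + (1 - alpha) * sum(cB)
    return num / den

def wcap(cA, cB, alpha):
    """Weighted CAP under GKL: weighted geometric pooling."""
    x = cA[0] ** alpha * cB[0] ** (1 - alpha)
    notx = cA[1] ** alpha * cB[1] ** (1 - alpha)
    return x / (x + notx)

cA, cB, alpha = [0.9, 0.4], [0.2, 0.5], 0.3  # both incoherent
print(ftp(cA, cB, alpha), ptf(cA, cB, alpha), wcap(cA, cB, alpha))
```

At these inputs the three recommendations are pairwise distinct.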
