Aggregating incoherent credences: the case of geometric pooling
In the last few posts (here and here), I've been exploring how we should extend the probabilistic aggregation method of linear pooling so that it applies to groups that contain incoherent individuals (which is, let's be honest, just about all groups). And our answer has been this: there are three methods -- linear-pool-then-fix, fix-then-linear-pool, and fix-and-linear-pool-together -- and they agree with one another just in case you fix incoherent credences by taking the nearest coherent credences as measured by squared Euclidean distance. In this post, I ask how we should extend the probabilistic aggregation method of geometric pooling.
As before, I'll just consider the simplest case, where we have two individuals, Adila and Benoit, and they have credence functions -- $c_A$ and $c_B$, respectively -- that are defined for a proposition $X$ and its negation $\overline{X}$. Suppose $c_A$ and $c_B$ are coherent. Then geometric pooling says:
**Geometric pooling** For a weight $0 \leq \alpha \leq 1$, the aggregation of $c_A$ and $c_B$ is $c$, where
- $c(X) = \frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$
- $c(\overline{X}) = \frac{c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$
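To see the definition in action, here is a minimal Python sketch of the two-proposition case. The function name `geometric_pool` and the example numbers are my own illustration, not from the post:

```python
def geometric_pool(cA, cB, alpha):
    """Geometrically pool two credence functions over {X, not-X}.

    cA, cB: pairs (c(X), c(not-X)); alpha: the weight on cA, in [0, 1].
    Take the weighted geometric average coordinate-wise, then
    normalize so that the pooled credences sum to 1.
    """
    gX = cA[0] ** alpha * cB[0] ** (1 - alpha)
    gNotX = cA[1] ** alpha * cB[1] ** (1 - alpha)
    total = gX + gNotX
    return (gX / total, gNotX / total)

# Two coherent credence functions, pooled with equal weights.
cA = (0.8, 0.2)
cB = (0.4, 0.6)
print(geometric_pool(cA, cB, 0.5))  # sums to 1 by construction
```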
Now, in the case of linear pooling, if $c_A$ or $c_B$ is incoherent, then typically any linear pool of them is incoherent too. In the case of geometric pooling, things stand differently. Linear pooling takes a weighted arithmetic average of the credences being aggregated, and a weighted arithmetic average of coherent credences is itself coherent; so, when only coherent credences are in play, there is no need to normalize after averaging. By contrast, even when the credences being aggregated are coherent, their weighted geometric average typically is not: a weighted geometric mean never exceeds the corresponding weighted arithmetic mean, so the unnormalized values sum to at most 1, with equality only when $c_A = c_B$. Geometric pooling therefore requires us first to take the weighted geometric average of the credences we are pooling and then to normalize the result, to ensure coherence. But this trick works whether or not the original credences are coherent. So we need do nothing more to geometric pooling in order to apply it to incoherent agents.
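A quick check of that last claim, reusing the `geometric_pool` sketch above with deliberately incoherent inputs (again, the numbers are just an illustration):

```python
# Incoherent inputs: credences in X and in not-X that don't sum to 1.
cA_inc = (0.8, 0.5)   # sums to 1.3
cB_inc = (0.3, 0.3)   # sums to 0.6
pooled = geometric_pool(cA_inc, cB_inc, 0.5)
print(pooled, sum(pooled))  # the output sums to 1: coherent
```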
Nonetheless, questions still arise. We have just seen that, if we geometrically pool our two incoherent agents directly, the result is coherent, so no further step of fixing up the credences is needed. But what if we instead fix up our two incoherent agents first, so that they are coherent, and then geometrically pool them? Does this give the same answer as pooling the incoherent agents directly? And, similarly, what if we decide to fix and pool together?
Interestingly, the results are exactly the reverse of those in the case of linear pooling. There, if we fix up incoherent credences by taking the coherent credences that minimize squared Euclidean distance (SED), all three methods agree, whereas if we fix them up by taking the coherent credences that minimize generalized Kullback-Leibler (GKL) divergence, the three methods sometimes disagree. In the case of geometric pooling, it is the opposite. Fixing up using GKL divergence makes all three methods agree: pool, fix-then-pool, and fix-and-pool-together all give the same result. The reason is easy to see in the binary case: the GKL fix of a credence function is just its normalization, and normalizing before taking the weighted geometric average changes both coordinates by a common constant factor, which the final normalization washes out. Fixing up using SED, by contrast, shifts both credences by the same additive amount, which does not factor through the geometric average, and so the three methods sometimes all disagree. That is, GKL is the natural distance measure to accompany geometric pooling, while SED is the natural measure to accompany linear pooling.
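Here is a sketch of that comparison, again building on `geometric_pool` above. It assumes two facts about the binary case that are easy to verify by differentiation: the GKL fix of a credence function is just its normalization, and the SED fix is its orthogonal projection onto the line $x + y = 1$ (I ignore the boundary cases where that projection would leave $[0,1]$). The helper names are mine:

```python
def fix_gkl(c):
    """Nearest coherent credences under generalized KL divergence;
    in this two-proposition case, that is just normalization."""
    total = c[0] + c[1]
    return (c[0] / total, c[1] / total)

def fix_sed(c):
    """Nearest coherent credences under squared Euclidean distance:
    orthogonal projection onto the line x + y = 1 (boundary cases,
    where the projection leaves [0, 1], are ignored here)."""
    shift = (1 - c[0] - c[1]) / 2
    return (c[0] + shift, c[1] + shift)

cA_inc, cB_inc, alpha = (0.8, 0.5), (0.3, 0.3), 0.5
direct = geometric_pool(cA_inc, cB_inc, alpha)
via_gkl = geometric_pool(fix_gkl(cA_inc), fix_gkl(cB_inc), alpha)
via_sed = geometric_pool(fix_sed(cA_inc), fix_sed(cB_inc), alpha)
print(direct)   # pool the incoherent credences directly
print(via_gkl)  # matches direct (up to rounding): GKL fixing commutes
print(via_sed)  # generally differs: SED fixing does not commute
```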