M-Phi: A blog dedicated to mathematical philosophy.<br /><br /><b>An almost-Dutch Book argument for the Principal Principle</b> (12 February 2018)<br /><br />People often talk about the synchronic <a href="https://plato.stanford.edu/entries/dutch-book/#BasiDutcBookArguForProb" target="_blank">Dutch Book argument for Probabilism</a> and the <a href="https://plato.stanford.edu/entries/dutch-book/#DiacDutcBookArgu" target="_blank">diachronic Dutch Strategy argument for Conditionalization</a>. But the synchronic Dutch Book argument for the Principal Principle is mentioned less often. That's perhaps because, in one sense, there couldn't possibly be such an argument. As the Converse Dutch Book Theorem shows, provided you satisfy Probabilism, there can be no Dutch Book made against you -- that is, there is no set of bets, each of which you will consider fair or favourable on its own, but which, when taken together, leads to a sure loss for you. So you can violate the Principal Principle without being vulnerable to a sure loss, provided you satisfy Probabilism. However, there is a related argument for the Principal Principle, and conversations with a couple of philosophers recently made me think it might be worth laying it out.<br /><br />Here is the result on which the argument is based:<br /><br />(I) Suppose your credences violate the Principal Principle but satisfy Probabilism. 
Then there is a book of bets and a price such that: (i) you consider that price favourable for that book -- that is, your subjective expectation of the total net gain is positive; (ii) every possible objective chance function considers that price unfavourable -- that is, the objective expectation of the total net gain is guaranteed to be negative.<br /><br />(II) Suppose your credences satisfy both the Principal Principle and Probabilism. Then there is no book of bets and price such that: (i) you consider that price favourable for that book; (ii) every possible objective chance function considers that price unfavourable.<br /><br />Put another way:<br /><br />(I') Suppose your credences violate the Principal Principle. Then there are two actions $a$ and $b$ such that: you prefer $b$ to $a$, but every possible objective chance function prefers $a$ to $b$.<br /><br />(II') Suppose your credences satisfy the Principal Principle. For any two actions $a$ and $b$: if every possible objective chance function prefers $a$ to $b$, then you prefer $a$ to $b$.<br /><br />To move from (I) and (II) to (I') and (II'), let $b$ be the action of accepting the book of bets at the given price, and let $a$ be the action of rejecting it. 
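To make (I) concrete, here is a minimal numerical sketch. The numbers are my own toy illustration, not from the post: a single proposition, two possible chance functions for it, and a credence that lies outside their convex hull. The stake, the value of $\varepsilon$, and the price follow the recipe used in the proof of (2I).

```python
# Toy illustration of result (I). One proposition X ("the coin lands
# heads"), two possible objective chance functions, and a credence in X
# that violates the Principal Principle by lying outside their convex hull.

chances = [0.3, 0.7]   # possible objective chances of X
c = 0.9                # credence in X; violates PP, since 0.9 > 0.7

# Closest point of the convex hull [0.3, 0.7] to c:
c_star = min(max(c, min(chances)), max(chances))   # = 0.7

# Stake of the single bet on X: it pays S if X is true, 0 otherwise.
S = c - c_star

# Pick epsilon so that S*p < S*c - epsilon for every chance p, then
# price the book halfway into that gap, below c's expected payout.
epsilon = S * c - max(S * p for p in chances)
price = S * c - epsilon / 2

gain_by_c = S * c - price                            # positive
gains_by_chance = [S * p - price for p in chances]   # all negative
```

So the agent with credence $c$ expects to gain from buying the bet at this price, while every possible chance function expects the purchase to lose money.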
<br /><br />The proof splits into two parts:<br /><br />(1) First, we note that a credence function $c$ satisfies the Principal Principle iff $c$ is in the closed convex hull of the set of possible chance functions.<br /><br />(2) Second, we prove that:<br /><br />(2I) If a probability function $c$ lies outside the closed convex hull of a set of probability functions $\mathcal{X}$, then there is a book of bets and a price such that the expected total net gain from that book at that price by the lights of $c$ is positive, while the expected total net gain from that book at that price by the lights of each $p$ in $\mathcal{X}$ is negative.<br /><br />(2II) If a probability function $c$ lies inside the closed convex hull of a set of probability functions $\mathcal{X}$, then there is no book of bets and price such that the expected total net gain from that book at that price by the lights of $c$ is positive, while the expected total net gain from that book at that price by the lights of each $p$ in $\mathcal{X}$ is negative.<br /><br />Here's the proof of (2), which I lift from my <a href="https://drive.google.com/file/d/11hxCUJAKLk7_6_WARz56z6ITm9lX4U5y/view" target="_blank">recent justification of linear pooling</a> -- the same technique is applicable since the Principal Principle essentially says that you should set your credences by applying linear pooling to the possible objective chances.<br /><br />First:<br /><ul><li>Let $\Omega$ be the set of possible worlds.</li><li>Let $\mathcal{F} = \{X_1, \ldots, X_n\}$ be the set of propositions over which our probability functions are defined. 
So each $X_i$ is a subset of $\Omega$.</li></ul>Now:<br /><ul><li>We represent a probability function $p$ defined on $\mathcal{F}$ as a vector in $\mathbb{R}^n$, namely, $p = \langle p(X_1), \ldots, p(X_n)\rangle$.</li><li>Given a proposition $X$ in $\mathcal{F}$ and a stake $S$ in $\mathbb{R}$, we define the bet $B_{X, S}$ as follows: $$B_{X, S}(\omega) = \left \{ \begin{array}{ll}<br />S & \mbox{if } \omega \in X \\<br />0 & \mbox{if } \omega \not \in X<br />\end{array}<br />\right.$$ So $B_{X, S}$ pays out $S$ if $X$ is true and $0$ if $X$ is false.</li><li>We represent the book of bets $\sum^n_{i=1} B_{X_i, S_i}$ as a vector in $\mathbb{R}^n$, namely, $S = \langle S_1, \ldots, S_n\rangle$. </li></ul><br /><b>Lemma 1</b><br />If $p$ is a probability function on $\mathcal{F}$, the expected payoff of the book of bets $\sum^n_{i=1} B_{X_i, S_i}$ by the lights of $p$ is $$S \cdot p = \sum^n_{i=1} p(X_i)S_i$$<br /><b>Lemma 2</b><br />Suppose $c$ is a probability function on $\mathcal{F}$, $\mathcal{X}$ is a set of probability functions on $\mathcal{F}$, and $\mathcal{X}^+$ is the closed convex hull of $\mathcal{X}$. Then, if $c \not \in \mathcal{X}^+$, there is a vector $S$ and $\varepsilon > 0$ such that, for all $p$ in $\mathcal{X}$, $$S \cdot p < S \cdot c - \varepsilon$$<br /><i>Proof of Lemma</i> <i>2</i>. Suppose $c \not \in \mathcal{X}^+$. Let $c^*$ be the closest point in $\mathcal{X}^+$ to $c$, and let $S = c - c^*$. Then, for any $p$ in $\mathcal{X}$, the angle $\theta$ between $S$ and $p - c$ is obtuse and thus $\mathrm{cos}\, \theta < 0$. So, since $S \cdot (p - c) = ||S||\, ||p - c||\, \mathrm{cos}\, \theta$ and $||S||, ||p - c|| > 0$, we have $S \cdot (p - c) < 0$. And hence $S \cdot p < S \cdot c$. What's more, since $\mathcal{X}^+$ is closed, $c$ is not a limit point of $\mathcal{X}^+$, and thus there is $\delta > 0$ such that $||p - c|| > \delta$ for all $p$ in $\mathcal{X}$. 
Thus, there is $\varepsilon > 0$ such that $S \cdot p < S \cdot c - \varepsilon$, for all $p$ in $\mathcal{X}$.<br /><br />We now derive (2I) and (2II) from Lemmas 1 and 2:<br /><br />Let $\mathcal{X}$ be the set of possible objective chance functions. If $c$ violates the Principal Principle, then $c$ is not in $\mathcal{X}^+$. Thus, by Lemma 2, there is a book of bets $\sum^n_{i=1} B_{X_i, S_i}$ and $\varepsilon > 0$ such that, for any objective chance function $p$ in $\mathcal{X}$, $S \cdot p < S \cdot c - \varepsilon$. By Lemma 1, $S \cdot p$ is the expected payout of the book of bets by the lights of $p$, while $S \cdot c$ is the expected payout of the book of bets by the lights of $c$. Now, suppose we were to offer an agent with credence function $c$ the book of bets $\sum^n_{i=1} B_{X_i, S_i}$ for the price of $S \cdot c - \frac{\varepsilon}{2}$. Then this would have positive expected net gain by the lights of $c$, but negative expected net gain by the lights of each $p$ in $\mathcal{X}$. This gives (2I).<br /><br />(2II) then holds because, when $c$ is in the closed convex hull of $\mathcal{X}$, its expectation of a random variable is in the closed convex hull of the expectations of that random variable by the lights of the probability functions in $\mathcal{X}$. Thus, if the expectation of a random variable is negative by the lights of all the probability functions in $\mathcal{X}$, then its expectation by the lights of $c$ is not positive.<br /><br />Richard Pettigrew<br /><br /><b>A Dutch Book argument for linear pooling</b> (1 January 2018)<br /><br />Often, we wish to aggregate the probabilistic opinions of different agents. 
They might be experts on the effects of housing policy on people sleeping rough, for instance, and we might wish to produce from their different probabilistic opinions an aggregate opinion that we can use to guide policymaking. Methods for undertaking such aggregation are called <i>pooling operators</i>. They take as their input a sequence of probability functions $c_1, \ldots, c_n$, all defined on the same set of propositions, $\mathcal{F}$. And they give as their output a single probability function $c$, also defined on $\mathcal{F}$, which is the aggregate of $c_1, \ldots, c_n$. (If the experts have non-probabilistic credences, or if they have credences defined on different sets of propositions or events, problems arise -- I've written about these <a href="http://m-phi.blogspot.co.uk/2017/03/a-dilemma-for-judgment-aggregation.html" target="_blank">here</a> and <a href="http://m-phi.blogspot.co.uk/2017/09/aggregating-abstaining-experts.html" target="_blank">here</a>.) Perhaps the simplest are the <i>linear pooling operators</i>. Given non-negative weights $\alpha_1, \ldots, \alpha_n$ that sum to 1, one for each probability function to be aggregated, the linear pool of $c_1, \ldots, c_n$ with these weights is: $c = \alpha_1 c_1 + \ldots + \alpha_n c_n$. So the probability that the aggregate assigns to a proposition (or event) is the weighted average of the probabilities that the individuals assign to that proposition (event), with the weights $\alpha_1, \ldots, \alpha_n$.<br /><br />Linear pooling has had a hard time recently. 
<a href="http://onlinelibrary.wiley.com/doi/10.1111/nous.12143/abstract" target="_blank">Elkin and Wheeler</a> reminded us that linear pooling almost never preserves unanimous judgments of independence; <a href="https://link.springer.com/article/10.1007/s11098-014-0350-8" target="_blank">Russell et al.</a> reminded us that it almost never commutes with Bayesian conditionalization; and <a href="http://eprints.lse.ac.uk/80762/1/Bradley_Learning%20from%20others_2017.pdf" target="_blank">Bradley</a> showed that aggregating a group of experts using linear pooling almost never gives the same result as you would obtain from updating your own probabilities in the usual Bayesian way when you learn the probabilities of those experts. I've tried to defend linear pooling against the first two attacks <a href="https://drive.google.com/file/d/0B-Gzj6gcSXKrWHNLZzF6TERraWc/view" target="_blank">here</a>. In that paper, I also offer a positive argument in favour of that aggregation method: I argue that, if your aggregate is not a result of linear pooling, there will be an alternative aggregate that each expert expects to be more accurate than yours; if your aggregate is a result of linear pooling, this can't happen. Thus, my argument is a non-pragmatic, accuracy-based argument, in the same vein as Jim Joyce's non-pragmatic vindication of probabilism. 
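The linear pooling recipe itself is short enough to sketch in code. The propositions, credences, and weights below are invented purely for illustration:

```python
# Linear pooling: the aggregate assigns to each proposition the weighted
# average of the experts' probabilities. All numbers are illustrative.

def linear_pool(credences, weights):
    """Weighted average of the experts' credences, proposition by proposition.

    credences: list of dicts, each mapping propositions to probabilities,
               all defined on the same propositions.
    weights:   non-negative numbers summing to 1, one per expert.
    """
    props = credences[0].keys()
    return {X: sum(w * c[X] for w, c in zip(weights, credences))
            for X in props}

c1 = {"rain": 0.2, "no rain": 0.8}
c2 = {"rain": 0.6, "no rain": 0.4}

c = linear_pool([c1, c2], weights=[0.25, 0.75])
# c["rain"] = 0.25 * 0.2 + 0.75 * 0.6 = 0.5
```

Note that the aggregate is automatically a probability function here: a convex combination of probability functions is itself a probability function.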
In this post, I offer an alternative, pragmatic, Dutch book-style defence, in the same vein as the standard Ramsey-de Finetti argument for probabilism.<br /><br />My argument is based on the following fact: <b>if your aggregate probability function is not a result of linear pooling, there will be a series of bets that the aggregate will consider fair but which each expert will expect to lose money (or utility); if your aggregate is a result of linear pooling, this can't happen.</b> Since one of the things we might wish to use an aggregate to do is to help us make communal decisions, a putative aggregate cannot be considered acceptable if it will lead us to make a binary choice one way when every expert agrees that it should be made the other way. Thus, we should aggregate credences using a linear pooling operator.<br /><br />We now prove the mathematical fact behind the argument, namely, that if $c$ is not a linear pool of $c_1, \ldots, c_n$, then there is a bet that $c$ will consider fair, and yet each $c_i$ will expect it to lose money; the converse is straightforward.<br /><br />Suppose $\mathcal{F} = \{X_1, \ldots, X_m\}$. Then:<br /><ul><li>We can represent a probability function $c$ on $\mathcal{F}$ as a vector in $\mathbb{R}^m$, namely, $c = \langle c(X_1), \ldots, c(X_m)\rangle$.</li><li>We can also represent a book of bets on the propositions in $\mathcal{F}$ by a vector in $\mathbb{R}^m$, namely, $S = \langle S_1, \ldots, S_m\rangle$, where $S_i$ is the stake of the bet on $X_i$, so that the bet on $X_i$ pays out $S_i$ dollars (or utiles) if $X_i$ is true and $0$ dollars (or utiles) if $X_i$ is false.</li><li>An agent with probability function $c$ will be prepared to pay $c(X_i)S_i$ for a bet on $X_i$ with stake $S_i$, and thus will be prepared to pay $S \cdot c = c(X_1)S_1 + \ldots + c(X_m)S_m$ dollars (or utiles) for the book of bets with stakes $S = \langle S_1, \ldots, S_m\rangle$. 
(As is usual in Dutch book-style arguments, we assume that the agent is risk neutral.)</li><li>This is because $S \cdot c$ is the expected payout of the book of bets with stakes $S$ by the lights of probability function $c$.</li></ul>Now, suppose $c$ is not a linear pool of $c_1, \ldots, c_n$. So $c$ lies outside the convex hull of $\{c_1, \ldots, c_n\}$. Let $c^*$ be the closest point to $c$ inside that convex hull. And let $S = c - c^*$. Then the angle $\theta$ between $S$ and $c_i - c$ is obtuse and thus $\mathrm{cos}\, \theta < 0$ (see diagram below). So, since $S \cdot (c_i - c) = ||S||\, ||c_i - c||\, \mathrm{cos}\, \theta$ and $||S||, ||c_i - c|| > 0$, we have $S \cdot (c_i - c) < 0$. And hence $S \cdot c_i < S \cdot c$. But recall:<br /><ul><li>$S \cdot c$ is the amount that the aggregate $c$ is prepared to pay for the book of bets with stakes $S$; and </li><li>$S \cdot c_i$ is expert $i$'s expected payout of the book of bets with stakes $S$.</li></ul>Thus, each expert will expect that book of bets to pay out less than $c$ will be willing to pay for it.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-Z4J-OXKKzu8/WkqZZPqWmBI/AAAAAAAAApQ/wwuZLqQwtzIUzt17WzSiE5sycbnfaOlFwCLcBGAs/s1600/IMG_3856.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://1.bp.blogspot.com/-Z4J-OXKKzu8/WkqZZPqWmBI/AAAAAAAAApQ/wwuZLqQwtzIUzt17WzSiE5sycbnfaOlFwCLcBGAs/s400/IMG_3856.JPG" width="400" /></a></div><br /><br />Richard Pettigrew<br /><br /><b>Two Paradoxes of Belief</b> (by Roy T Cook, 10 October 2017)<br /><br />This was posted originally at the <a 
href="https://blog.oup.com/category/series-columns/paradoxes-puzzles-roy-cook/" target="_blank">OUPBlog</a>. This is the first in a series of cross-posted blogs by <a href="https://cla.umn.edu/about/directory/profile/cookx432" target="_blank">Roy T Cook</a> (Minnesota) from the OUPBlog series on <a href="https://blog.oup.com/category/series-columns/paradoxes-puzzles-roy-cook/" target="_blank">Paradox and Puzzles</a>.<br /><br />The Liar paradox arises via considering the Liar sentence:<br /><br />L: L is not true.<br /><br />and then reasoning in accordance with the:<br /><br />T-schema:<br /><br />“Φ is true if and only if what Φ says is the case.”<br /><br />Along similar lines, we obtain the Montague paradox (or the “paradox of the knower”) by considering the following sentence:<br /><br />M: M is not knowable.<br /><br />and then reasoning in accordance with the following two claims:<br /><br />Factivity:<br /><br />“If Φ is knowable then what Φ says is the case.”<br /><br />Necessitation:<br /><br />“If Φ is a theorem (i.e. is provable), then Φ is knowable.”<br /><br />Put in very informal terms, these results show that our intuitive accounts of truth and of knowledge are inconsistent. Much work in logic has been carried out in attempting to formulate weaker accounts of truth and of knowledge that (i) are strong enough to allow these notions to do substantial work, and (ii) are not susceptible to these paradoxes (and related paradoxes, such as Curry and Yablo versions of both of the above). A bit less well known is that certain strong but not altogether implausible accounts of idealized belief also lead to paradox.<br /><br />The puzzles involve an idealized notion of belief (perhaps better paraphrased as “rational commitment” or “justifiable belief”), where one believes something in this sense if and only if (i) one explicitly believes it, or (ii) one is somehow committed to the claim even if one doesn’t actively believe it. 
Hence, on this understanding belief is closed under logical consequence – one believes all of the logical consequences of one’s beliefs. In particular, the following holds:<br /><br />B-Closure:<br /><br />“If you believe that, if Φ then Ψ, and you believe Φ, then you believe Ψ.”<br /><br />Now, for such an idealized account of belief, the rule of B-Necessitation:<br /><br />B-Necessitation:<br /><br />“If Φ is a theorem (i.e. is provable), then Φ is believed.”<br /><br />is extremely plausible – after all, presumably anything that can be proved is something that follows from things we believe (since it follows from nothing more than our axioms for belief). In addition, we will assume that our beliefs are consistent:<br /><br />B-Consistency:<br /><br />“If I believe Φ, then I do not believe that Φ is not the case.”<br /><br />So far, so good. But neither the belief analogue of the T-schema:<br /><br />B-schema:<br /><br />“Φ is believed if and only if what Φ says is the case.”<br /><br />nor the belief analogue of Factivity:<br /><br />B-Factivity:<br /><br />“If you believe Φ then what Φ says is the case.”<br /><br />is at all plausible. After all, just because we believe something (or even that the claim in question follows from what we believe, in some sense) doesn’t mean the belief has to be true!<br /><br />There are other, weaker, principles about belief, however, that are not intuitively implausible, but when combined with B-Closure, B-Necessitation, and B-Consistency lead to paradox. We will look at two principles – each of which captures a sense in which we cannot be wrong about what we think we don’t believe.<br /><br />The first such principle we will call the First Transparency Principle for Disbelief:<br /><br />TPDB1:<br /><br />“If you believe that you don’t believe Φ then you don’t believe Φ.”<br /><br />In other words, although many of our beliefs can be wrong, according to TPDB1 our beliefs about what we do not believe cannot be wrong. 
The second principle, which is a mirror image of the first, we will call the Second Transparency Principle for Disbelief:<br /><br />TPDB2:<br /><br />“If you don’t believe Φ then you believe that you don’t believe Φ.”<br /><br />In other words, according to TPDB2 we are aware of (i.e. have true beliefs about) all of the facts regarding what we don’t believe.<br /><br />Either of these principles, combined with B-Closure, B-Necessitation, and B-Consistency, leads to paradox. I will present the argument for TPDB1. The argument for TPDB2 is similar, and left to the reader (although I will give an important hint below).<br /><br />Consider the sentence:<br /><br />S: It is not the case that I believe S.<br /><br />Now, by inspection we can understand this sentence, and thus conclude that:<br /><br />(1) What S says is the case if and only if I do not believe S.<br /><br />Further, (1) is something we can, via inspecting the original sentence, informally prove. (Or, if we were being more formal, and doing all of this in arithmetic enriched with a predicate “B(x)” for idealized belief, a formal version of the above would be a theorem due to Gödel’s diagonalization lemma.) So we can apply B-Necessitation to (1), obtaining:<br /><br />(2) I believe that: what S says is the case if and only if I do not believe S.<br /><br />Applying a version of B-Closure, this entails:<br /><br />(3) I believe S if and only if I believe that I do not believe S.<br /><br />Now, assume (for reductio ad absurdum) that:<br /><br />(4) I believe S.<br /><br />Then combining (3) and (4) and some basic logic, we obtain:<br /><br />(5) I believe that I do not believe S.<br /><br />Applying TPDB1 to (5), we get:<br /><br />(6) I do not believe S.<br /><br />But this contradicts (4). 
So lines (4) through (6) amount to a refutation of line (4), and hence a proof that:<br /><br />(7) I do not believe S.<br /><br />Now, (7) is clearly a theorem (we just proved it), so we can apply B-Necessitation, arriving at:<br /><br />(8) I believe that I do not believe S.<br /><br />Combining (8) and (3) leads us to:<br /><br />(9) I believe S.<br /><br />But this obviously contradicts (7), and we have our final contradiction.<br /><br />Note that this argument does not actually use B-Consistency (hint for the second argument involving TPDB2: you will need B-Consistency!).<br /><br />These paradoxes seem to show that, as a matter of logic, we cannot have perfectly reliable beliefs about what we don’t believe – in other words, in this idealized sense of belief, there are always things that we believe that we don’t believe, but in actuality we do believe (the failure of TPDB1), and things that we don’t believe, but don’t believe that we don’t believe (the failure of TPDB2). At least, the puzzles show this if we take them to force us to reject both TPDB1 and TPDB2 in the same way that many feel that the Liar paradox forces us to abandon the full T-Schema.<br /><br />Once we’ve considered transparency principles for disbelief, it’s natural to consider corresponding principles for belief. There are two. The first is the First Transparency Principle for Belief:<br /><br />TPB1:<br /><br />“If you believe that you believe Φ then you believe Φ.”<br /><br />In other words, according to TPB1 our beliefs about what we believe cannot be wrong. The second principle, again a mirror image of the first, is the Second Transparency Principle for Belief:<br /><br />TPB2:<br /><br />“If you believe Φ then you believe that you believe Φ.”<br /><br />In other words, according to TPB2 we are aware of all of the facts regarding what we believe.<br /><br />Is either of these two principles, combined with B-Closure, B-Necessitation, and B-Consistency, paradoxical? 
If not, are there additional, plausible principles that would lead to paradoxes if added to these claims? I’ll leave it to the reader to explore these questions further.<br /><br />A historical note: Like so many other cool puzzles and paradoxes, versions of some of these puzzles first appeared in the work of medieval logician Jean Buridan.Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com2tag:blogger.com,1999:blog-4987609114415205593.post-8893471410512231962017-09-10T09:49:00.001+01:002017-09-10T09:49:19.900+01:00Aggregating abstaining expertsIn a series of posts a few months ago (<a href="http://m-phi.blogspot.co.uk/2017/03/a-dilemma-for-judgment-aggregation.html" target="_blank">here</a>, <a href="http://m-phi.blogspot.co.uk/2017/03/a-little-more-on-aggregating-incoherent.html" target="_blank">here</a>, and <a href="http://m-phi.blogspot.co.uk/2017/03/aggregating-incoherent-credences-case.html" target="_blank">here</a>), I explored a particular method by which we might aggregate expert credences when those credences are incoherent. The result was this <a href="https://drive.google.com/file/d/0B-Gzj6gcSXKrSTRZRGNxOUdIR3M/view?usp=sharing" target="_blank">paper</a>, which is now forthcoming in <i>Synthese</i>. The method in question was called <i>the coherent approximation principle</i> (CAP), and it was introduced by Daniel Osherson and Moshe Vardi in <a href="https://www.cs.rice.edu/~vardi/papers/geb06.pdf" target="_blank">this</a> 2006 paper. CAP is based on what we might call <i>the principle of minimal mutilation</i>. We begin with a collection of credence functions, $c_1$, ..., $c_n$, one for each expert, and some of which might be incoherent. What we want at the end is a single coherent credence function $c$ that is the aggregate of $c_1$, ..., $c_n$. 
The principle of minimal mutilation says that $c$ should be as close as possible to the $c_i$s -- when aggregating a collection of credence functions, you should change them as little as possible to obtain your aggregate.<br /><br />We can spell this out more precisely by introducing a <i>divergence</i> $\mathfrak{D}$. We might think of this as a measure of how far one credence function lies from another. Thus, $\mathfrak{D}(c, c')$ measures the distance from $c$ to $c'$. We call these measures <i>divergences</i> rather than <i>distances</i> or <i>metrics</i>, since they do not have the usual features that mathematicians assume of a metric: we assume $\mathfrak{D}(c, c') \geq 0$, for any $c, c'$, and $\mathfrak{D}(c, c') = 0$ iff $c = c'$, but we do not assume that $\mathfrak{D}$ is symmetric nor that it satisfies the triangle inequality. In particular, we assume that $\mathfrak{D}$ is an <i>additive Bregman divergence</i>. The standard example of an additive Bregman divergence is <i>squared Euclidean distance</i>: if $c$, $c'$ are both defined on the set of propositions $F$, then<br />$$<br />\mathrm{SED}(c, c') = \sum_{X \in F} |c(X) - c'(X)|^2<br />$$In fact, $\mathrm{SED}$ is symmetric, but it does not satisfy the triangle inequality. The details of this family of divergences needn't detain us here (but see here and here for more). Indeed, we will simply use $\mathrm{SED}$ throughout. But a more general treatment would look at other additive Bregman divergences, and I hope to do this soon.<br /><br />Now, suppose $c_1$, ..., $c_n$ is a set of expert credence functions. And suppose $c_i$ is defined on the set of propositions $F_i$. And suppose that $\mathfrak{D}$ is an additive Bregman divergence -- you might take it to be $\mathrm{SED}$. Then how do we define the aggregate $c$ that is obtained from $c_1$, ..., $c_n$ by a minimal mutilation? We let $c$ be the coherent credence function such that the sum of the distances from $c$ to the $c_i$s is minimal. 
That is,<br />$$<br />\mathrm{CAP}_{\mathfrak{D}}(c_1, \ldots, c_n) = \mathrm{arg\ min}_{c \in P_F} \sum^n_{i=1} \mathfrak{D}(c, c_i)<br />$$<br />where $P_F$ is the set of coherent credence functions over $F = \bigcup^n_{i=1} F_i$, and $\mathfrak{D}(c, c_i)$ is computed over the propositions in $F_i$.<br /><br />As we see in my paper linked above, if each of the credence functions is defined over the same set of propositions -- that is, if $F_i = F_j$, for all $1 \leq i, j \leq n$ -- then:<br /><ul><li>if $\mathfrak{D}$ is squared Euclidean distance, then this aggregate is the <i>straight linear pool</i> of the original credences; if $c$ is defined on the partition $X_1$, ..., $X_m$, then the straight linear pool of $c_1$, ..., $c_n$ is this:$$c(X_j) = \frac{1}{n}c_1(X_j) + ... + \frac{1}{n}c_n(X_j)$$</li><li>if $\mathfrak{D}$ is the generalized Kullback-Leibler divergence, then the aggregate is the <i>straight geometric pool</i> of the originals; if $c$ is defined on the partition $X_1$, ..., $X_m$, then the straight geometric pool of $c_1$, ..., $c_n$ is this: $$c(X_j) = \frac{1}{K}(c_1(X_j)^{\frac{1}{n}} \times ... \times c_n(X_j)^{\frac{1}{n}})$$where $K$ is a normalizing factor.</li></ul>(For more on these types of aggregation, see <a href="http://personal.lse.ac.uk/list/PDF-files/OpinionPoolingReview.pdf" target="_blank">here</a> and <a href="https://link.springer.com/article/10.1007/s11098-014-0350-8" target="_blank">here</a>).<br /><br />In this post, I'm interested in cases where our agents have credences in different sets of propositions. For instance, the first agent has credences concerning the rainfall in Bristol tomorrow and the rainfall in Bath, but the second has credences concerning the rainfall in Bristol and the rainfall in Birmingham.<br /><br />I want to begin by pointing to a shortcoming of CAP when it is applied to such cases. It fails to satisfy what we might think of as a basic desideratum of such procedures. 
To illustrate this desideratum, let's suppose that the three propositions $X_1$, $X_2$, and $X_3$ form a partition. And suppose that Amira has credences in $X_1$, $X_2$, and $X_3$, while Benito has credences only in $X_1$ and $X_2$. In particular:<br /><ul><li>Amira's credence function is: $c_A(X_1) = 0.3$, $c_A(X_2) = 0.6$, $c_A(X_3) = 0.1$.</li><li>Benito's credence function is: $c_B(X_1) = 0.2$, $c_B(X_2) = 0.6$.</li></ul>Now, notice that, while Amira's credence function is defined on the whole partition, Benito's is not. But, nonetheless, Benito's credences uniquely determine a coherent credence function on the whole partition:<br /><ul><li>Benito's extended credence function is: $c^*_B(X_1) = 0.2$, $c^*_B(X_2) = 0.6$, $c^*_B(X_3) = 0.2$.</li></ul>Thus, we might expect our aggregation procedure to give the same result whether we aggregate Amira's credence function with Benito's or with Benito's extended credence function. That is, we might expect the same result whether we aggregate $c_A$ with $c_B$ or with $c^*_B$. After all, $c^*_B$ is in some sense implicit in $c_B$. An agent with credence function $c_B$ is committed to the credences assigned by credence function $c^*_B$.<br /><br />However, CAP does not do this. As mentioned above, if you aggregate $c_A$ and $c^*_B$ using $\mathrm{SED}$, then the result is their linear pool: $\frac{1}{2}c_A + \frac{1}{2}c^*_B$. Thus, the aggregate credence in $X_1$ is $0.25$; in $X_2$ it is $0.6$; and in $X_3$ it is $0.15$. The result is different if you aggregate $c_A$ and $c_B$ using $\mathrm{SED}$: the aggregate credence in $X_1$ is $0.2625$; in $X_2$ it is $0.6125$; in $X_3$ it is $0.125$.<br /><br />Now, it is natural to think that the problem arises here because Amira's credences are getting too much say in how far a potential aggregate lies from the agents, since she has credences in three propositions, while Benito only has credences in two. 
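These figures are easy to reproduce. For $\mathrm{SED}$ on a single partition, the constrained minimisation has a closed form via a Lagrange multiplier, at least when the minimiser assigns positive credence to every cell. Here is a quick sketch in Python (the function `cap_sed` and its dict-based representation are my own illustration, not code from the paper):

```python
def cap_sed(credence_fns, partition):
    """Minimise the summed squared Euclidean distance to the given credence
    functions over the probability simplex on `partition`.

    Each credence function is a dict assigning credences to some cells of the
    partition. Setting the gradient of the Lagrangian to zero gives
    c(X) = (s(X) + mu) / n(X), where n(X) counts the experts with a credence
    in X, s(X) sums their credences, and mu is fixed by the credences summing
    to one. (Assumes the minimiser is interior, i.e. has no zero coordinates.)"""
    n = {x: sum(1 for c in credence_fns if x in c) for x in partition}
    s = {x: sum(c[x] for c in credence_fns if x in c) for x in partition}
    mu = (1 - sum(s[x] / n[x] for x in partition)) / sum(1 / n[x] for x in partition)
    return {x: (s[x] + mu) / n[x] for x in partition}

c_A = {'X1': 0.3, 'X2': 0.6, 'X3': 0.1}       # Amira
c_B = {'X1': 0.2, 'X2': 0.6}                  # Benito, no credence in X3
c_B_star = {'X1': 0.2, 'X2': 0.6, 'X3': 0.2}  # Benito's unique coherent extension

print(cap_sed([c_A, c_B], ['X1', 'X2', 'X3']))
# -> approximately {'X1': 0.2625, 'X2': 0.6125, 'X3': 0.125}
print(cap_sed([c_A, c_B_star], ['X1', 'X2', 'X3']))
# -> approximately {'X1': 0.25, 'X2': 0.6, 'X3': 0.15}, the straight linear pool
```

The first call recovers the $(0.2625, 0.6125, 0.125)$ aggregate, the second the straight linear pool $(0.25, 0.6, 0.15)$.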
And, sure enough, $\mathrm{CAP}_{\mathrm{SED}}(c_A, c_B)$ lies closer to $c_A$ than to $c_B$ and closer to $c_A$ than the aggregate of $c_A$ and $c^*_B$ lies. And it is equally natural to try to solve this potential bias in favour of the agent with more credences by normalising. That is, we might define a new version of CAP:<br />$$<br />\mathrm{CAP}^+_{\mathfrak{D}}(c_1, \ldots, c_n) = \mathrm{arg\ min}_{c \in P_F} \sum^n_{i=1} \frac{1}{|F_i|}\mathfrak{D}(c, c_i)<br />$$<br />However, this doesn't help. Using this definition, the aggregate of Amira's credence function $c_A$ and Benito's extended credence function $c^*_B$ remains the same; but the aggregate of Amira's credence function and Benito's original credence function changes -- the aggregate credence in $X_1$ is $0.25333$; in $X_2$, it is $0.61333$; in $X_3$, it is $0.13333$. Again, the two ways of aggregating disagree.<br /><br />So here is our desideratum in general:<br /><br /><b>Agreement with Coherent Commitments (ACC)</b> Suppose $c_1$, ..., $c_n$ are coherent credence functions, with $c_i$ defined on $F_i$, for each $1 \leq i \leq n$. And let $F = \bigcup^n_{i=1} F_i$. Now suppose that, for each $c_i$ defined on $F_i$, there is a unique coherent credence function $c^*_i$ defined on $F$ that extends $c_i$ -- that is, $c_i(X) = c^*_i(X)$ for all $X$ in $F_i$. Then the aggregate of $c_1$, ..., $c_n$ should be the same as the aggregate of $c^*_1$, ..., $c^*_n$.<br /><br />CAP does not satisfy ACC. Is there a natural aggregation rule that does? Here's a suggestion. Suppose you wish to aggregate a set of credence functions $c_1$, ..., $c_n$, where $c_i$ is defined on $F_i$, as above. 
Then we proceed as follows.<br /><ol><li>First, let $F = \bigcup^n_{i=1} F_i$.</li><li>Second, for each $1 \leq i \leq n$, let $$c^*_i = \{c : \mbox{$c$ is coherent, $c$ is defined on $F$, and $c(X) = c_i(X)$ for all $X$ in $F_i$}\}$$ That is, while $c_i$ represents a precise credal state defined on $F_i$, $c^*_i$ represents an imprecise credal state defined on $F$. It is the set of coherent credence functions on $F$ that extend $c_i$. That is, it is the set of coherent credence functions on $F$ that agree with $c_i$ on propositions in $F_i$. Thus, if, like Benito, your coherent credences on $F_i$ uniquely determine your coherent credences on $F$, then $c^*_i$ is just the singleton that contains that unique extension. But if your credences over $F_i$ do not uniquely determine your coherent credences over $F$, then $c^*_i$ will contain more coherent credence functions.</li><li>Finally, we take the aggregate of $c_1$, ..., $c_n$ to be the credence function $c$ that minimizes the total distance from $c$ to the $c^*_i$s. The problem is that there isn't a single natural definition of the distance from a point to a set of points, even when you have a definition of the distance between individual points. I adopt a very particular measure of such distances here; but it would be interesting to explore the alternative options in greater detail elsewhere. Suppose $c$ is a credence function and $C$ is a set of credence functions. 
Then $$\mathfrak{D}(c, C) = \frac{\mathrm{min}_{c' \in C}\mathfrak{D}(c, c') + \mathrm{max}_{c' \in C}\mathfrak{D}(c, c')}{2}$$ With this in hand, we can finally give our aggregation procedure:$$\mathrm{CAP}^*_{\mathfrak{D}}(c_1, \ldots, c_n) = \mathrm{arg\ min}_{c \in P_F} \sum^n_{i=1} \mathfrak{D}(c, c^*_i)$$ </li></ol>The first thing to note about CAP$^*$ is that, unlike the original CAP, or CAP$^+$, it automatically satisfies ACC.<br /><br />Let's now see CAP$^*$ in action.<br /><ul><li>Since CAP$^*$ satisfies ACC, the aggregate for $c_A$ and $c_B$ is the same as the aggregate for $c_A$ and $c^*_B$, which is just their straight linear pool.</li><li>Next, suppose we wish to aggregate Amira with a third agent, Cleo, who has a credence only in $X_1$, which she assigns $0.5$ -- that is, $c_C(X_1) = 0.5$. Then $F = \{X_1, X_2, X_3\}$, and $$c^*_C = \{c : c(X_1) = 0.5, 0 \leq c(X_2) \leq 0.5, c(X_3) = 1 - c(X_1) - c(X_2)\}$$ So, $$\mathrm{CAP}^*_{\mathfrak{D}}(c_A, c_C) = \mathrm{arg\ min}_{c' \in P_F} [\mathfrak{D}(c', c_A) + \mathfrak{D}(c', c^*_C)]$$Working through the calculation for $\mathfrak{D} = \mathrm{SED}$, we obtain the following aggregate: $c(X_1) = 0.4$, $c(X_2) = 0.425$, $c(X_3) = 0.175$.</li><li>One interesting feature of CAP$^*$ is that, unlike CAP, we can apply it to individual agents. Thus, for instance, suppose we wish to take Cleo's single credence in $X_1$ and 'fill in' her credences in $X_2$ and $X_3$. Then we can use CAP$^*$ to do this. Her new credence function will be $$c'_C = \mathrm{CAP}^*_{\mathrm{SED}}(c_C) = \mathrm{arg\ min}_{c' \in P_F} \mathrm{SED}(c', c^*_C)$$ That is, $c'_C(X_1) = 0.5$, $c'_C(X_2) = 0.25$, $c'_C(X_3) = 0.25$. Rather unsurprisingly, $c'_C$ is the midpoint of the line segment formed by the imprecise probabilities $c^*_C$. Now, notice: the aggregate of Amira and Cleo given above is just the straight linear pool of Amira's credence function $c_A$ and Cleo's 'filled in' credence function $c'_C$. 
I would conjecture that this is generally true: filling in credences using CAP$^*_{\mathrm{SED}}$ and then aggregating using straight linear pooling always agrees with aggregating using CAP$^*_{\mathrm{SED}}$. And perhaps this generalises beyond SED.</li></ul>Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com1tag:blogger.com,1999:blog-4987609114415205593.post-63402577115091454602017-09-01T05:56:00.004+01:002017-09-01T05:56:52.697+01:00Two PhD positions in probability & law in Gdansk<div dir="ltr" style="text-align: left;" trbidi="on">More details <a href="http://entiaetnomina.blogspot.jp/2017/09/two-phd-positions-in-probability-law.html" target="_blank">here.</a></div>Rafal Urbaniakhttp://www.blogger.com/profile/10277466578023939272noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-28586265974733149402017-07-31T16:49:00.000+01:002017-07-31T16:49:01.444+01:00Logic in the wild CFP (Ghent, 9-10 Nov 2017)<div dir="ltr" style="text-align: left;" trbidi="on"><div class="p1"><span class="s1"><b>CALL FOR PAPERS</b></span></div><div class="p1"><span class="s1"><b>workshop on </b></span></div><div class="p1"><span class="s1"><b>LOGIC IN THE WILD<span class="Apple-converted-space"> </span></b></span></div><div class="p1"><span class="s1"><b>Ghent University, 9 & 10 November 2017. </b></span></div><div class="p2"><span class="s1"></span><br /></div><div class="p3"><br /></div><div class="p3"><span class="s1"><b>The scope of this workshop</b></span></div><div class="p4" style="text-align: justify;"><span class="s1">Nowadays we are witnessing a ‘practical’, or cognitive turn in logic. The approach draws on enormous achievements of a legion of formal and mathematical logicians, but focuses on ‘the Wild’: actual human processes of reasoning and argumentation. Moreover, high standards of inquiry that we owe to formal logicians offer a new quality in research on reasoning and argumentation. 
In terms of John Corcoran’s distinction between logic as formal ontology and logic as formal epistemology, the aim of the practical turn is to make formal epistemology even more epistemically oriented. This is not to say that this ‘practically turned’ (or cognitively oriented) logic becomes just a part of psychology. This is to say that this logic acquires a new task of “systematically keeping track of changing representations of information”, as Johan van Benthem puts it, and that it contests the claim that the distinction between descriptive and normative accounts of reasoning is disjoint and exhaustive. From a perspective different from the purely psychological one, logic becomes -- again -- interested in answering Dewey’s question about the Wild: how do we think? This is the new alluring face of psychologism, or cognitivism, in logic, as opposed to the old one, which Frege and Husserl fought against. This is the area of research to which our workshop is devoted.</span></div><div class="p3"><span class="s1">For this workshop we invite submissions on:</span></div><div class="p3"><span class="s1">- applications of logic to the analysis of actual human reasoning and argumentation processes.</span></div><div class="p3"><span class="s1">- tools and methods suited for such applications.</span></div><div class="p3"><span class="s1">- neural basis of logical reasoning.</span></div><div class="p3"><span class="s1">- educational issues of cognitively-oriented logic.</span></div><div class="p5"><span class="s1"></span><br /></div><div class="p3"><span class="s1"><b>Keynote speakers</b></span></div><div class="p6"><span class="s1">Keith Stenning (University of Edinburgh)</span></div><div class="p6"><span class="s1">Iris van Rooij (Radboud University Nijmegen)</span></div><div class="p3"><span class="s1">Christian Strasser (Ruhr University Bochum)</span></div><div class="p3"><br /></div><div class="p3"><span class="s1"><b>How to submit an abstract</b></span></div><div 
class="p3"><span class="s1">We welcome submissions on any topic that fits into the scope as described above. Send your abstract of 300 to 500 words to: <a href="mailto:lrr@ugent.be"><span class="s2">lrr@ugent.be</span></a> before <b>10 September 2017</b>.</span></div><div class="p3"><span class="s1">Notification of acceptance: 22 September 2017.</span></div><div class="p3"><br /></div><div class="p3"><span class="s1"><b>Website</b></span></div><div class="p3"><span class="s1">More information about the workshop (venue, registration, …) is available at</span></div><div class="p3"><span class="s2"><a href="http://www.lrr.ugent.be/logic-in-the-wild/">http://www.lrr.ugent.be/logic-in-the-wild/</a></span><span class="s1">. The programme will be available there in October.</span></div><div class="p3"><br /></div><div class="p3"><span class="s1"><b>Background</b></span></div><div class="p3"><span class="s1">This workshop is organized by the scientific research network <i>Logical and Methodological Analysis of Scientific Reasoning Processes</i> (LMASRP) which is sponsored by the Research Foundation Flanders (FWO).</span></div><div class="p3"><span class="s1">All information about the network can be found at <a href="http://www.lmasrp.ugent.be/"><span class="s2">http://www.lmasrp.ugent.be/</span></a></span></div><style type="text/css">p.p1 {margin: 0.0px 0.0px 0.0px 0.0px; text-align: center; font: 12.0px 'Times New Roman'; color: #212121; -webkit-text-stroke: #212121} p.p2 {margin: 0.0px 0.0px 0.0px 0.0px; text-align: center; font: 12.0px 'Times New Roman'; color: #212121; -webkit-text-stroke: #212121; min-height: 15.0px} p.p3 {margin: 0.0px 0.0px 0.0px 0.0px; font: 12.0px 'Times New Roman'; color: #212121; -webkit-text-stroke: #212121} p.p4 {margin: 0.0px 0.0px 0.0px 0.0px; text-align: justify; font: 12.0px 'Times New Roman'; color: #212121; -webkit-text-stroke: #212121} p.p5 {margin: 0.0px 0.0px 0.0px 0.0px; font: 12.0px 'Times New Roman'; color: #212121; 
-webkit-text-stroke: #212121; min-height: 15.0px} p.p6 {margin: 0.0px 0.0px 0.0px 0.0px; font: 12.0px 'Times New Roman'; -webkit-text-stroke: #000000} span.s1 {font-kerning: none} span.s2 {text-decoration: underline ; font-kerning: none; color: #4787ff; -webkit-text-stroke: 0px #4787ff} </style> <br /><div class="p3"><span class="s1">An overview of the previous workshops of the network can be found at <a href="http://www.lrr.ugent.be/"><span class="s2">http://www.lrr.ugent.be/</span></a>.</span></div></div>Rafal Urbaniakhttp://www.blogger.com/profile/10277466578023939272noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-65811373758714643312017-07-02T00:16:00.000+01:002017-07-02T00:16:22.600+01:00Three Postdoctoral Fellowships at the MCMP (LMU Munich)The Munich Center for Mathematical Philosophy (MCMP) seeks applications for <span class="dunkelrot">three 3-year postdoctoral fellowships</span> starting on <strong>October 1, 2017</strong>. (A later starting date is possible.) We are especially interested in candidates who work in the field of mathematical philosophy with a focus on philosophical logic (broadly construed, including philosophy and foundations of mathematics, semantics, formal philosophy of language, inductive logic and foundations of probability, and more).<br /><br /> Candidates who have not finished their PhD at the time of the application deadline have to provide evidence that they will have their PhD in hand at the time the fellowship starts. 
Applications (including a cover letter that addresses, amongst others, one's academic background, research interests and the proposed starting date, a CV, a list of publications, a sample of written work of no more than 5000 words, and a description of a planned research project of about 1000 words) should be sent by email (in one PDF document) to <a class="g-link-mail" href="mailto:office.leitgeb@lrz.uni-muenchen.de" title="Send email to: office.leitgeb@lrz.uni-muenchen.de">office.leitgeb@lrz.uni-muenchen.de</a> by <strong>August 15, 2017</strong>. Hard copy applications are not accepted. Additionally, two confidential letters of reference addressing the applicant's qualifications for academic research should be sent to the same email address from the referees directly.<br /><br /> The MCMP hosts a vibrant research community of faculty, postdoctoral fellows, doctoral fellows, master students, and visiting fellows. It organizes at least two weekly colloquia and a weekly internal work-in-progress seminar, as well as various other activities such as workshops, conferences, summer schools, and reading groups. The successful candidates will partake in the MCMP's academic activities and enjoy its administrative facilities and support. The official language at the MCMP is English and fluency in German is not mandatory.<br /><br /> We especially encourage female scholars to apply. The LMU in general, and the MCMP in particular, endeavor to raise the percentage of women among its academic personnel. Furthermore, given equal qualification, preference will be given to candidates with disabilities.<br /><br /> The fellowships are remunerated with 1.853 €/month (paid out without deductions for tax and social security). The MCMP is able to support fellows concerning expenses for professional traveling.<br /><br /> For further information, please contact <a href="http://www.mcmp.philosophie.uni-muenchen.de/people/faculty/hannes_leitgeb/index.html" title="Leitgeb, Hannes">Prof. 
Hannes Leitgeb</a> (<a class="g-link-mail" href="mailto:H.Leitgeb@lmu.de" title="Send email to: H.Leitgeb@lmu.de">H.Leitgeb@lmu.de</a>).<br /><br /><br /> Vincenzo Crupihttp://www.blogger.com/profile/08069145846190162517noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-86370269246635121322017-07-02T00:13:00.001+01:002017-07-02T00:13:43.562+01:00Three Doctoral Fellowships at the MCMP (LMU Munich)The Munich Center for Mathematical Philosophy (MCMP) seeks applications for <span class="dunkelrot">three 3-year doctoral fellowships</span> starting on <strong>October 1, 2017</strong>. (A later starting date is possible.) We are especially interested in candidates who work in the field of mathematical philosophy with a focus on philosophical logic (broadly construed, including philosophy and foundations of mathematics, semantics, formal philosophy of language, inductive logic and foundations of probability, and more).<br /><br /> Candidates who have not finished their MA at the time of the application deadline have to provide evidence that they will have their MA in hand at the time the fellowship starts. Applications (including a cover letter that addresses, amongst others, one's academic background, research interests and the proposed starting date, a CV, a list of publications (if applicable), a sample of written work of no more than 3000 words, and a description of the planned PhD-project of about 2000 words) should be sent by email (in one PDF document) to <a class="g-link-mail" href="mailto:office.leitgeb@lrz.uni-muenchen.de" title="Send email to: office.leitgeb@lrz.uni-muenchen.de">office.leitgeb@lrz.uni-muenchen.de</a> by <strong>August 15, 2017</strong>. Hard copy applications are not accepted. 
Additionally, one confidential letter of reference addressing the applicant's qualifications for academic research should be sent to the same email address from the referees directly.<br /><br /> The MCMP hosts a vibrant research community of faculty, postdoctoral fellows, doctoral fellows, master students, and visiting fellows. It organizes at least two weekly colloquia and a weekly internal work-in-progress seminar, as well as various other activities such as workshops, conferences, summer schools, and reading groups. The successful candidates will partake in the MCMP's academic activities and enjoy its administrative facilities and support. The official language at the MCMP is English and fluency in German is not mandatory.<br /><br /> We especially encourage female scholars to apply. The LMU in general, and the MCMP in particular, endeavor to raise the percentage of women among its academic personnel. Furthermore, given equal qualification, preference will be given to candidates with disabilities.<br /><br /> The fellowships are remunerated with 1.468 €/month (paid out without deductions for tax and social security). The MCMP is able to support fellows concerning expenses for professional traveling.<br /><br /> For further information, please contact <a href="http://www.mcmp.philosophie.uni-muenchen.de/people/faculty/hannes_leitgeb/index.html" target="_blank" title="Leitgeb, Hannes">Prof. 
Hannes Leitgeb</a> (<a class="g-link-mail" href="mailto:H.Leitgeb@lmu.de" title="Send email to: H.Leitgeb@lmu.de">H.Leitgeb@lmu.de</a>).<br /><br /><br /> Vincenzo Crupihttp://www.blogger.com/profile/08069145846190162517noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-69091727360893763872017-05-16T12:06:00.000+01:002017-05-17T15:24:56.751+01:00The Wisdom of the Crowds: generalizing the Diversity Prediction TheoremI've just been reading <a href="http://aidanlyon.com/" target="_blank">Aidan Lyon</a>'s fascinating paper, <a href="http://aidanlyon.com/media/publications/WoCC.pdf" target="_blank">Collective Wisdom</a>. In it, he mentions a result known as the <i>Diversity Prediction Theorem</i>, which is sometimes taken to explain why crowds are wiser, on average, than the individuals who compose them. The theorem was originally proved by <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.8876" target="_blank">Anders Krogh and Jesper Vedelsby</a>, but it has entered the literature on social epistemology through the work of <a href="http://press.princeton.edu/titles/8757.html" target="_blank">Scott E. Page</a>. In this post, I'll generalize this result.<br /><br />The Diversity Prediction Theorem concerns a situation in which a number of different individuals estimate a particular quantity -- in the original example, it is the weight of an ox at a local fair. Take the crowd's estimate of the quantity to be the average of the individual estimates. Then the theorem shows that the distance from the crowd's estimate to the true value is never greater than the average distance from the individual estimates to the true value (with equality only when all of the individual estimates coincide); and, moreover, the difference between the two is always given by the average distance from the individual estimates to the crowd's estimate (which you might think of as the variance of the individual estimates).<br /><br />Let's make this precise. Suppose you have a group of $n$ individuals. 
They each provide an estimate for a real-valued quantity. The $i^\mathrm{th}$ individual gives the prediction $q_i$. The true value of this quantity is $\tau$. And we measure the distance from one estimate of a quantity to another, or to the true value of that quantity, using squared error. Then:<br /><ul><li>The crowd's prediction of the quantity is $c = \frac{1}{n}\sum^n_{i=1} q_i$.</li><li>The crowd's distance from the true quantity is $\mathrm{SqE}(c) = (c-\tau)^2$.</li><li>The $i^\mathrm{th}$ individual's distance from the true quantity is $\mathrm{SqE}(q_i) = (q_i-\tau)^2$.</li><li>The average individual distance from the true quantity is $\frac{1}{n} \sum^n_{i=1} \mathrm{SqE}(q_i) = \frac{1}{n} \sum^n_{i=1} (q_i - \tau)^2$.</li><li>The average individual distance from the crowd's estimate is $v = \frac{1}{n}\sum^n_{i=1} (q_i - c)^2$.</li></ul>Given this, we have:<br /><br /><b>Diversity Prediction Theorem</b> $$\mathrm{SqE}(c) = \frac{1}{n} \sum^n_{i=1} \mathrm{SqE}(q_i) - v$$ <br />The theorem is easy enough to prove. You essentially just follow the algebra. However, following through the proof, you might be forgiven for thinking that the result says more about some quirk of squared error as a measure of distance than about the wisdom of crowds. And of course squared error is just one way of measuring the distance from an estimate of a quantity to the true value of that quantity, or from one estimate of a quantity to another. There are other such distance measures. So the question arises: Does the Diversity Prediction Theorem hold if we replace squared error with one of these alternative measures of distance? In particular, it is natural to take any of the so-called Bregman divergences $\mathfrak{d}$ to be a legitimate measure of distance from one estimate to another. I won't say much about Bregman divergences here, except to give their formal definition. 
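Before generalizing, it is worth convincing ourselves of the squared-error identity numerically. A minimal sketch in Python, with invented estimates (the helper name and the numbers are mine, chosen for illustration):

```python
def diversity_prediction_check(estimates, truth):
    """Compute both sides of the Diversity Prediction Theorem under squared error."""
    n = len(estimates)
    c = sum(estimates) / n                                # the crowd's estimate
    crowd_error = (c - truth) ** 2                        # SqE(c)
    avg_error = sum((q - truth) ** 2 for q in estimates) / n
    diversity = sum((q - c) ** 2 for q in estimates) / n  # v
    return crowd_error, avg_error - diversity

# Three guesses at an ox's weight, against a true weight of 650:
lhs, rhs = diversity_prediction_check([600, 700, 800], 650)
print(lhs, rhs)  # both sides come out at 2500 (up to floating-point rounding)
```

Here the crowd's error ($2500$) equals the average individual error ($27500/3$) minus the diversity ($20000/3$), as the theorem requires.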
To learn about their properties, have a look <a href="http://mark.reid.name/blog/meet-the-bregman-divergences.html" target="_blank">here</a> and <a href="https://en.wikipedia.org/wiki/Bregman_divergence" target="_blank">here</a>. They were introduced by Bregman as a natural generalization of squared error.<br /><br /><b>Definition (Bregman divergence)</b> A function $\mathfrak{d} : [0, \infty) \times [0, \infty) \rightarrow [0, \infty]$ is a <i>Bregman divergence </i>if there is a continuously differentiable, strictly convex function $\varphi : [0, \infty) \rightarrow [0, \infty)$ such that $$\mathfrak{d}(x, y) = \varphi(x) - \varphi(y) - \varphi'(y)(x-y)$$<br />Squared error is itself one of the Bregman divergences. It is the one generated by $\varphi(x) = x^2$. But there are many others, each generated by a different function $\varphi$.<br /><br />Now, suppose we measure distance between estimates using a Bregman divergence $\mathfrak{d}$. Then:<br /><ul><li>The crowd's prediction of the quantity is $c = \frac{1}{n}\sum^n_{i=1} q_i$.</li><li>The crowd's distance from the true quantity is $\mathrm{E}(c) = \mathfrak{d}(c, \tau)$.</li><li>The $i^\mathrm{th}$ individual's distance from the true quantity is $\mathrm{E}(q_i) = \mathfrak{d}(q_i, \tau)$.</li><li>The average individual distance from the true quantity is $\frac{1}{n} \sum^n_{i=1} \mathrm{E}(q_i) = \frac{1}{n} \sum^n_{i=1} \mathfrak{d}(q_i, \tau)$.</li><li>The average individual distance from the crowd's estimate is $v = \frac{1}{n}\sum^n_{i=1} \mathfrak{d}(q_i, c)$.</li></ul> Given this, we have:<br /><br /><b>Generalized Diversity Prediction Theorem</b> $$\mathrm{E}(c) = \frac{1}{n} \sum^n_{i=1} \mathrm{E}(q_i) - v$$<br /><i>Proof.</i><br />\begin{eqnarray*}<br />& & \frac{1}{n} \sum^n_{i=1} \mathrm{E}(q_i) - v \\<br />& = & \frac{1}{n} \sum^n_{i=1} [ \mathfrak{d}(q_i, \tau) - \mathfrak{d}(q_i, c)] \\ <br />& = & \frac{1}{n} \sum^n_{i=1} [\varphi(q_i) - \varphi(\tau) - \varphi'(\tau)(q_i - \tau)] - [\varphi(q_i) - \varphi(c) - 
\varphi'(c)(q_i - c)] \\<br />& = & \frac{1}{n} \sum^n_{i=1} [\varphi(q_i) - \varphi(\tau) - \varphi'(\tau)(q_i - \tau) - \varphi(q_i) + \varphi(c) + \varphi'(c)(q_i - c)] \\<br />& = & - \varphi(\tau) - \varphi'(\tau)((\frac{1}{n} \sum^n_{i=1} q_i) - \tau) + \varphi(c) + \varphi'(c)((\frac{1}{n} \sum^n_{i=1} q_i) - c) \\<br />& = & - \varphi(\tau) - \varphi'(\tau)(c - \tau) + \varphi(c) + \varphi'(c)(c - c) \\<br />& = & \varphi(c) - \varphi(\tau) - \varphi'(\tau)(c - \tau) \\<br />& = & \mathfrak{d}(c, \tau) \\<br />& = & \mathrm{E}(c)<br />\end{eqnarray*}<br />as required.Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-42608390793708542062017-05-11T07:44:00.001+01:002017-05-11T08:06:14.999+01:00Reasoning Club Conference 2017<br />The <b>Fifth Reasoning Club Conference</b> will take place at the <a href="https://www.llc.unito.it/" target="_blank">Center for Logic, Language, and Cognition</a> in Turin on May 18-19, 2017. <br /><br />The <a href="https://www.kent.ac.uk/secl/researchcentres/reasoning/club/index.html" target="_blank">Reasoning Club</a> is a network of institutes, centres, departments, and groups addressing research topics connected to reasoning, inference, and methodology broadly construed. It issues the monthly gazette <i><a href="http://blogs.kent.ac.uk/thereasoner/about/" target="_blank">The Reasoner</a></i>. 
(Earlier editions of the meeting were held in <a href="http://www.vub.ac.be/CLWF/RC2012/" target="_blank">Brussels</a>, <a href="http://reasoningclubpisa.weebly.com/" target="_blank">Pisa</a>, <a href="https://reasoningclubkent.wordpress.com/" target="_blank">Kent</a>, and <a href="http://www.maths.manchester.ac.uk/news-and-events/events/fourth-reasoning-club-conf/" target="_blank">Manchester</a>.)<br /><br /><br /><br /><b>PROGRAM</b><br /><br /><br />THURSDAY, MAY 18<br /><br />Palazzo Badini<br />via Verdi 10, Torino<br />Sala Lauree di Psicologia (ground floor)<br /><br /><br />9:00 | welcome and coffee<br /><br />9:30 | greetings<br /> presentation of the new editorship of <i>The Reasoner</i><br /> (<b>Hykel HOSNI,</b> Milan)<br /><br /><br />Morning session – chair: Gustavo CEVOLANI (IMT Lucca)<br /><br /><br />10:00 | invited talk<br /><br /><b><a href="http://fitelson.org/" target="_blank">Branden FITELSON</a></b> (Northeastern University, Boston)<br /><br /><i>Two approaches to belief revision</i><br /><br />In this paper, we compare and contrast two methods for the qualitative revision of (viz., full) beliefs. The first (Bayesian) method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent's posterior credences after conditionalization. The second (Logical) method is the orthodox AGM approach to belief revision. 
Our primary aim will be to characterize the ways in which these two approaches can disagree with each other — especially in the special case where the agent's belief set is deductively cogent.<br /><br />(joint work with Ted Shear and Jonathan Weisberg)<br /><br /><br />11:00 | <b>Ted SHEAR</b> (Queensland) and <b>John QUIGGIN</b> (Queensland)<br /><i> </i><br /><i>A modal logic for reasonable belief</i><br /><br /><br />11:45 | <b>Nina POTH</b> (Edinburgh) and <b>Peter BRÖSSEL</b> (Bochum)<br /><br /><i>Bayesian inferences and conceptual spaces: Solving the complex-first paradox</i><br /><br /><br />12:30 | lunch break<br /><br /><br />Afternoon session I – chair: Peter BRÖSSEL (Bochum)<br /><br /><br />13:30 | invited talk<br /><br /><b><a href="https://www5.unitn.it/People/en/Web/Persona/PER0003393#INFO" target="_blank">Katya TENTORI</a></b> (University of Trento)<br /><br /><i>Judging forecasting accuracy </i><br /><i>How human intuitions can help improving formal models</i><br /><br />Most of the scoring rules that have been discussed and defended in the literature are not ordinally equivalent, with the consequence that, after the very same outcome has materialized, a forecast <i>X</i> can be evaluated as more accurate than <i>Y</i> according to one model but less accurate according to another. A question that naturally arises is therefore which of these models better captures people’s intuitive assessment of forecasting accuracy. To answer this question, we developed a new experimental paradigm for eliciting ordinal judgments of accuracy concerning pairs of forecasts for which various combinations of associations/dissociations between the Quadratic, Logarithmic, and Spherical scoring rules are obtained. We found that, overall, the Logarithmic model is the best predictor of people’s accuracy judgments, but also that there are cases in which these judgments — although they are normatively sound — systematically depart from what is expected by all the models. 
These results represent an empirical evaluation of the descriptive adequacy of the three most popular scoring rules and offer insights for the development of new formal models that might favour a more natural elicitation of truthful and informative beliefs from human forecasters.<br /><br />(joint work with Vincenzo Crupi and Andrea Passerini)<br /><br /><br />14:15 | <b>Catharine SAINT-CROIX</b> (Michigan)<br /><br /><i>Immodesty and evaluative uncertainty</i><br /><br /><br />15:15 | <b>Michael SCHIPPERS</b> (Oldenburg), <b>Jakob KOSCHOLKE</b> (Hamburg)<br /><br /><i>Against relative overlap measures of coherence</i><br /><br /><br />16:00 | coffee break<br /><br /><br />Afternoon session II – chair: Paolo MAFFEZIOLI (Torino)<br /><br /><br />16:30 | <b>Simon HEWITT</b> (Leeds)<br /><br /><i>Frege's theorem in plural logic</i><br /><br /><br />17:15 | <b>Lorenzo ROSSI</b> (Salzburg) and <b>Julien MURZI</b> (Salzburg)<br /><br /><i>Generalized Revenge</i><br /><br /><br /> <br />FRIDAY, MAY 19<br /><br />Campus Luigi Einaudi<br />Lungo Dora Siena 100/A<br />Sala Lauree Rossa<br />building D1 (ground floor)<br /><br /><br />9:00 | welcome and coffee<br /><br /><br />Morning session – chair: Jan SPRENGER (Tilburg)<br /><br /><br />9:30 | invited talk<br /><br /><b><a href="http://paulegre.free.fr/" target="_blank">Paul EGRÉ</a></b> (Institut Jean Nicod, Paris)<br /><br /><i>Logical consequence and ordinary reasoning</i><br /><br />The notion of logical consequence has been approached from a variety of angles. Tarski famously proposed a semantic characterization (in terms of truth-preservation), but also a structural characterization (in terms of axiomatic properties including reflexivity, transitivity, monotonicity, and other features). In recent work, E. Chemla, B. Spector and I have proposed a characterization of a wider class of consequence relations than Tarskian relations, which we call "respectable" (<i>Journal of Logic and Computation</i>, forthcoming). 
The class also includes non-reflexive and nontransitive relations, which can be motivated in relation to ordinary reasoning (such as reasoning with vague predicates, see Zardini 2008, Cobreros <i>et al</i>. 2012, or reasoning with presuppositions, see Strawson 1952, von Fintel 1998, Sharvit 2016). Chemla <i>et al</i>.'s characterization is partly structural, and partly semantic, however. In this talk I will present further advances toward a purely structural characterization of such respectable consequence relations. I will discuss the significance of this research program toward bringing logic closer to ordinary reasoning.<br /><br />(joint work with Emmanuel Chemla and Benjamin Spector)<br /><br /><br />10:30 | <b>Niels SKOVGAARD-OLSEN</b> (Freiburg)<br /><br /><i>Conditionals and multiple norm conflicts</i><br /><br /><br />11:15 | <b>Luis ROSA</b> (Munich)<br /><br /><i>Knowledge grounded on pure reasoning</i><br /><br /><br />12:00 | lunch break<br /><br /><br />Afternoon session I – chair: Steven HALES (Bloomsburg)<br /><br /><br />13:30 | invited talk<br /><br /><a href="http://lhenderson.org/" target="_blank"><b>Leah HENDERSON</b></a> (University of Groningen)<br /><br /><i>The unity of explanatory virtues</i><br /><br />Scientific theory choice is often characterised as an Inference to the Best Explanation (IBE) in which a number of distinct explanatory virtues are combined and traded off against one another. Furthermore, the epistemic significance of each explanatory virtue is often seen as highly case-specific. But are there really so many dimensions to theory choice? 
By considering how IBE may be situated in a Bayesian framework, I propose a more unified picture of the virtues in scientific theory choice.<br /><br /><br />14:30 | <b>Benjamin EVA</b> (Munich) and <b>Reuben STERN</b> (Munich)<br /><br /><i>Causal explanatory power</i><br /><br /><br />15:15 | coffee break<br /><br /><br />Afternoon session II – chair: Jakob KOSCHOLKE (Hamburg)<br /><br /><br />16:00 | <b>Barbara OSIMANI</b> (Munich)<br /><br /><i>Bias, random error, and the variety of evidence thesis</i><br /><br /><br />16:45 | <b>Felipe ROMERO</b> (Tilburg) and <b>Jan SPRENGER</b> (Tilburg)<br /><br /><i>Scientific self-correction: The Bayesian way</i><br /><br /><br /><br />ORGANIZING COMMITTEE<br /><br />Gustavo Cevolani (Torino)<br />Vincenzo Crupi (Torino)<br />Jason Konek (Kent)<br />Paolo Maffezioli (Torino)<br /><br /><br /><br />For any queries please contact Vincenzo Crupi (<a class="mailto" href="mailto:vincenzo.crupi@unito.it">vincenzo.crupi@unito.it</a><span class="mailto"></span>) or Jason Konek (<a class="mailto" href="mailto:jpkonek@ksu.edu">jpkonek@ksu.edu</a><span class="mailto"></span>).<br /><br /><br />Vincenzo Crupihttp://www.blogger.com/profile/08069145846190162517noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-43751540218688285602017-04-08T07:21:00.001+01:002017-04-08T07:22:37.836+01:00Formal Truth Theories workshop, Warsaw (Sep. 28-30)<div dir="ltr" style="text-align: left;" trbidi="on"><span style="text-align: justify;">Cezary Cieslinski and his team organize a workshop on formal theories of truth in Warsaw, to take place 28-30 September 2017. Invited speakers include Dora Achourioti, Ali Enayat, Kentaro Fujimoto, Volker Halbach, Graham Leigh, and Albert Visser. Submission deadline is May 15. 
More details </span><a href="http://formaltruththeories.pl/call-for-papers/" style="text-align: justify;">here</a><span style="text-align: justify;">.</span></div>Rafal Urbaniakhttp://www.blogger.com/profile/10277466578023939272noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-14284388941774572272017-03-19T18:17:00.004+00:002017-03-19T18:17:51.184+00:00Aggregating incoherent credences: the case of geometric poolingIn the last few posts (<a href="http://m-phi.blogspot.co.uk/2017/03/a-dilemma-for-judgment-aggregation.html" target="_blank">here</a> and <a href="http://m-phi.blogspot.co.uk/2017/03/a-little-more-on-aggregating-incoherent.html" target="_blank">here</a>), I've been exploring how we should extend the probabilistic aggregation method of linear pooling so that it applies to groups that contain incoherent individuals (which is, let's be honest, just about all groups). And our answer has been this: there are three methods -- linear-pool-then-fix, fix-then-linear-pool, and fix-and-linear-pool-together -- and they agree with one another just in case you fix incoherent credences by taking the nearest coherent credences as measured by squared Euclidean distance. In this post, I ask how we should extend the probabilistic aggregation method of geometric pooling.<br /><br />As before, I'll just consider the simplest case, where we have two individuals, Adila and Benoit, and they have credence functions -- $c_A$ and $c_B$, respectively -- that are defined for a proposition $X$ and its negation $\overline{X}$. Suppose $c_A$ and $c_B$ are coherent. 
Then geometric pooling says:<br /><br /><b>Geometric pooling </b>The aggregation of $c_A$ and $c_B$ is $c$, where<br /><ul><li>$c(X) = \frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$</li><li>$c(\overline{X}) = \frac{c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$</li></ul>for some $0 \leq \alpha \leq 1$.<br /><br />Now, in the case of linear pooling, if $c_A$ or $c_B$ is incoherent, then it is most likely that any linear pool of them is also incoherent. However, in the case of geometric pooling, this is not the case. Linear pooling requires us to take a weighted arithmetic average of the credences we are aggregating. If those credences are coherent, so is their weighted arithmetic average. Thus, if you are considering only coherent credences, there is no need to normalize the weighted arithmetic average after taking it to ensure coherence. However, even if the credences we are aggregating are coherent, their weighted geometric averages are not. Thus, geometric pooling requires that we first take the weighted geometric average of the credences we are pooling and then normalize the result, to ensure that the result is coherent. But this trick works whether or not the original credences are coherent. Thus, we need do nothing more to geometric pooling in order to apply it to incoherent agents.<br /><br />Nonetheless, questions still arise. What we have shown is that, if we first geometrically pool our two incoherent agents, then the result is in fact coherent and so we don't need to undertake the further step of fixing up the credences to make them coherent. But what if we first choose to fix up our two incoherent agents so that they are coherent, and then geometrically pool them? Does this give the same answer as if we just pooled the incoherent agents? 
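<br /><br />To make the normalization step concrete, here is a quick numerical sketch in Python (the function names and the credence values are illustrative, not from the post):

```python
def geometric_pool(cA, cB, alpha):
    # Weighted geometric average of two credence pairs (c(X), c(not-X)),
    # normalized so that the result sums to 1.
    wX = cA[0]**alpha * cB[0]**(1 - alpha)
    wNotX = cA[1]**alpha * cB[1]**(1 - alpha)
    return (wX / (wX + wNotX), wNotX / (wX + wNotX))

def normalize(c):
    # Rescale an incoherent pair so that it sums to 1.
    return (c[0] / (c[0] + c[1]), c[1] / (c[0] + c[1]))

cA = (0.7, 0.5)   # incoherent: sums to 1.2
cB = (0.2, 0.9)   # incoherent: sums to 1.1
alpha = 0.3

pooled = geometric_pool(cA, cB, alpha)
assert abs(sum(pooled) - 1) < 1e-12   # coherent with no further fixing

# Rescaling each agent to coherence first and then pooling gives the same
# answer, because each agent's rescaling constant factors out of the
# weighted geometric average and cancels in the final normalization:
fixed_first = geometric_pool(normalize(cA), normalize(cB), alpha)
assert all(abs(p - q) < 1e-12 for p, q in zip(pooled, fixed_first))
```

<br /><br />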
And, similarly, what if we decide to fix and pool together?<br /><br />Interestingly, the results are exactly the reverse of the results in the case of linear pooling. In that case, if we fix up incoherent credences by taking the coherent credences that minimize squared Euclidean distance, then all three methods agree, whereas if we fix them up by taking the coherent credences that minimize generalized Kullback-Leibler divergence, then sometimes all three methods disagree. In the case of geometric pooling, it is the opposite. Fixing up using generalized KL divergence makes all three methods agree -- that is, pool, fix-then-pool, and fix-and-pool-together all give the same result when we use GKL to measure distance. But fixing up using squared Euclidean distance leads to three separate methods that sometimes all disagree. That is, GKL is the natural distance measure to accompany geometric pooling, while SED is the natural measure to accompany linear pooling.Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-35259569628408363342017-03-17T12:05:00.000+00:002017-03-17T12:05:56.634+00:00A little more on aggregating incoherent credencesLast week, I <a href="https://m-phi.blogspot.co.uk/2017/03/a-dilemma-for-judgment-aggregation.html" target="_blank">wrote</a> about a problem that arises if you wish to aggregate the credal judgments of a group of agents when one or more of those agents has incoherent credences. I focussed on the case of two agents, Adila and Benoit, who have credence functions $c_A$ and $c_B$, respectively. $c_A$ and $c_B$ are defined over just two propositions, $X$ and its negation $\overline{X}$.<br /><br />I noted that there are two natural ways to aggregate $c_A$ and $c_B$ for someone who adheres to Probabilism, the principle that says that credences should be coherent. 
You might first fix up Adila's and Benoit's credences so that they are coherent, and then aggregate them using linear pooling -- let's call that <i>fix-</i><i>then-pool</i>. Or you might aggregate Adila's and Benoit's credences using linear pooling, and then fix up the pooled credences so that they are coherent -- let's call that <i>pool-</i><i>then-fix</i>. And I noted that, for some natural ways of fixing up incoherent credences, fix-then-pool gives a different result from pool-then-fix. This, I claimed, creates a dilemma for the person doing the aggregating, since there seems to be no principled reason to favour either method.<br /><br />How do we fix up incoherent credences? Well, a natural idea is to find the coherent credences that are closest to them and adopt those in their place. This obviously requires a measure of distance between two credence functions. In last week's post, I considered two:<br /><br /><b>Squared Euclidean Distance (SED)</b> For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$SED(c, c') = \sum^n_{i=1} (c(X_i) - c'(X_i))^2$$<br /><br /><b>Generalized Kullback-Leibler Divergence (GKL)</b> For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$GKL(c, c') = \sum^n_{i=1} c(X_i) \mathrm{log}\frac{c(X_i)}{c'(X_i)} - \sum^n_{i=1} c(X_i) + \sum^n_{i=1} c'(X_i)$$<br /><br />If we use $SED$ when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $SED(c^*, c)$ is minimal -- then fix-then-pool gives <i>the same results</i> as pool-then-fix.<br /><br />If we use GKL when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $GKL(c^*, c)$ is minimal -- then fix-then-pool gives <i>different results</i> from pool-then-fix.<br /><br />Since last week's post, I've been reading 
<a href="https://www.princeton.edu/~osherson/papers/preddAgg.pdf" target="_blank">this</a> paper by <a href="http://www.rand.org/about/people/p/predd_joel_b.html" target="_blank">Joel Predd</a>, <a href="http://www.princeton.edu/~osherson/" target="_blank">Daniel Osherson</a>, <a href="https://www.princeton.edu/~kulkarni/" target="_blank">Sanjeev Kulkarni</a>, and <a href="http://ee.princeton.edu/people/faculty/h-vincent-poor" target="_blank">Vincent Poor</a>. They suggest that we pool and fix incoherent credences in one go using a method called the Coherent Aggregation Principle (CAP), formulated in <a href="http://www.sciencedirect.com/science/article/pii/S0899825606000613" target="_blank">this</a> paper by <a href="http://www.princeton.edu/~osherson/" target="_blank">Daniel Osherson</a> and <a href="http://www.cs.rice.edu/~vardi/" target="_blank">Moshe Vardi</a>. In its original version, CAP says that we should aggregate Adila's and Benoit's credences by taking the coherent credence function $c$ such that the sum of the distance of $c$ from $c_A$ and the distance of $c$ from $c_B$ is minimized. That is,<br /><br /><b>CAP</b> Given a measure of distance $D$ between credence functions, we should pick the coherent credence function $c$ that minimizes $D(c, c_A) + D(c, c_B)$.<br /><br />As they note, if we take $SED$ to be our measure of distance, then this method generalizes the aggregation procedure on coherent credences that just takes straight averages of credences. 
That is, CAP entails unweighted linear pooling:<br /><br /><b>Unweighted Linear Pooling</b> If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\frac{1}{2} c_A + \frac{1}{2}c_B$$ <br /><br />We can generalize this result a little by taking a weighted sum of the distances, rather than the straight sum.<br /><br /><b>Weighted CAP </b>Given a measure of distance $D$ between credence functions, and given $0 \leq \alpha \leq 1$, we should pick the coherent credence function $c$ that minimizes $\alpha D(c, c_A) + (1-\alpha)D(c, c_B)$.<br /><br />If we take $SED$ to measure the distance between credence functions, then this method generalizes linear pooling. That is, Weighted CAP entails linear pooling:<br /><br /><b>Linear Pooling </b>If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\alpha c_A + (1-\alpha)c_B$$ for some $0 \leq \alpha \leq 1$.<br /><br />What's more, when distance is measured by $SED$, Weighted CAP agrees with fix-then-pool and with pool-then-fix (providing the fixing is done using $SED$ as well). Thus, when we use $SED$, all of the methods for aggregating incoherent credences that we've considered agree. In particular, they all recommend the following credence in $X$: $$\frac{1}{2} + \frac{\alpha(c_A(X)-c_A(\overline{X})) + (1-\alpha)(c_B(X) - c_B(\overline{X}))}{2}$$ <br /><br />However, the story is not nearly so neat and tidy if we measure the distance between two credence functions using $GKL$. 
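<br /><br />Before turning to $GKL$, the $SED$ agreement just stated is easy to check numerically. The sketch below (function names and numbers illustrative) uses the closed form for the $SED$-closest coherent pair on $X$, $\overline{X}$, namely $c^*(X) = \frac{1}{2} + \frac{c(X) - c(\overline{X})}{2}$, which you can verify by minimizing $(p - c(X))^2 + ((1-p) - c(\overline{X}))^2$ over $p$:

```python
def fix_sed(c):
    # Nearest coherent pair (p, 1 - p) to c = (c(X), c(not-X))
    # under squared Euclidean distance.
    p = 0.5 + (c[0] - c[1]) / 2
    return (p, 1 - p)

def linear_pool(cA, cB, alpha):
    # Weighted arithmetic average, coordinate by coordinate.
    return tuple(alpha * a + (1 - alpha) * b for a, b in zip(cA, cB))

cA, cB, alpha = (0.7, 0.5), (0.3, 0.9), 0.4   # incoherent inputs

fix_then_pool = linear_pool(fix_sed(cA), fix_sed(cB), alpha)
pool_then_fix = fix_sed(linear_pool(cA, cB, alpha))
formula = 0.5 + (alpha * (cA[0] - cA[1]) + (1 - alpha) * (cB[0] - cB[1])) / 2

assert all(abs(p - q) < 1e-12 for p, q in zip(fix_then_pool, pool_then_fix))
assert abs(fix_then_pool[0] - formula) < 1e-12   # matches the displayed credence in X
```

<br /><br />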
Here's the credence in $X$ recommended by fix-then-pool:$$\alpha \frac{c_A(X)}{c_A(X) + c_A(\overline{X})} + (1-\alpha)\frac{c_B(X)}{c_B(X) + c_B(\overline{X})}$$ Here's the credence in $X$ recommended by pool-then-fix: $$\frac{\alpha c_A(X) + (1-\alpha)c_B(X)}{\alpha (c_A(X) + c_A(\overline{X})) + (1-\alpha)(c_B(X) + c_B(\overline{X}))}$$ And here's the credence in $X$ recommended by Weighted CAP: $$\frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$$ For many values of $\alpha$, $c_A(X)$, $c_A(\overline{X})$, $c_B(X)$, $c_B(\overline{X})$ these will give three distinct results. <br /><br /><br />Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com10tag:blogger.com,1999:blog-4987609114415205593.post-3826985268867587772017-03-10T17:08:00.000+00:002017-03-13T12:29:10.557+00:00A dilemma for judgment aggregationLet's suppose that Adila and Benoit are both experts, and suppose that we are interested in gleaning from their opinions about a certain proposition $X$ and its negation $\overline{X}$ a judgment of our own about $X$ and $\overline{X}$. Adila has credence function $c_A$, while Benoit has credence function $c_B$. One standard way to derive our own credence function on the basis of this information is to take a <i>linear pool</i> or <i>weighted average</i> of Adila's and Benoit's credence functions. That is, we assign a weight to Adila ($\alpha$) and a weight to Benoit ($1-\alpha$) and we take the linear combination of their credence functions with these weights to be our credence function. So my credence in $X$ will be $\alpha c_A(X) + (1-\alpha) c_B(X)$, while my credence in $\overline{X}$ will be $\alpha c_A(\overline{X}) + (1-\alpha)c_B(\overline{X})$.<br /><br />But now suppose that either Adila or Benoit or both are probabilistically incoherent -- that is, either $c_A(X) + c_A(\overline{X}) \neq 1$ or $c_B(X) + c_B(\overline{X}) \neq 1$ or both. 
Then, it may well be that the linear pool of their credence functions is also probabilistically incoherent. That is,<br /><br />$(\alpha c_A(X) + (1-\alpha) c_B(X)) + (\alpha c_A(\overline{X}) + (1-\alpha)c_B(\overline{X})) = $<br /><br />$\alpha (c_A(X) + c_A(\overline{X})) + (1-\alpha)(c_B(X) + c_B(\overline{X})) \neq 1$<br /><br />But, as an adherent of Probabilism, I want my credences to be probabilistically coherent. So, what should I do?<br /><br />A natural suggestion is this: take the aggregated credences in $X$ and $\overline{X}$, and then take the closest pair of credences that are probabilistically coherent. Let's call that process the <i>coherentization</i> of the incoherent credences. Of course, to carry out this process, we need a measure of distance between any two credence functions. Luckily, that's easy to come by. Suppose you are an adherent of Probabilism because you are persuaded by the so-called <a href="http://m-phi.blogspot.co.uk/2013/05/joyces-argument-for-probabilism_24.html" target="_blank">accuracy dominance arguments</a> for that norm. According to these arguments, we measure the accuracy of a credence function by measuring its proximity to the ideal credence function, which we take to be the credence function that assigns credence 1 to all truths and credence 0 to all falsehoods. That is, we generate a measure of the accuracy of a credence function from a measure of the distance between two credence functions. Let's call that distance measure $D$. In the accuracy-first literature, there are reasons for taking $D$ to be a so-called <a href="http://m-phi.blogspot.co.uk/2014/04/how-should-we-measure-accuracy-in.html" target="_blank"><i>Bregman divergence</i></a>. 
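<br /><br />For concreteness: both of the distance measures used in this post are Bregman divergences, generated coordinatewise by $\varphi(x) = x^2$ (squared Euclidean distance) and by $\varphi(x) = x\,\mathrm{log}\,x$ (generalized Kullback-Leibler divergence). A minimal Python sketch (function names mine), checked against the explicit formulas:

```python
import math

def bregman(phi, dphi):
    # Bregman divergence generated by phi (with derivative dphi),
    # summed over the coordinates of two credence functions.
    def d(c, cp):
        return sum(phi(x) - phi(y) - dphi(y) * (x - y) for x, y in zip(c, cp))
    return d

sed = bregman(lambda x: x * x, lambda x: 2 * x)                      # phi(x) = x^2
gkl = bregman(lambda x: x * math.log(x), lambda x: math.log(x) + 1)  # phi(x) = x log x

c, cp = (0.6, 0.7), (0.5, 0.5)   # illustrative (incoherent) credence pairs
assert abs(sed(c, cp) - ((0.6 - 0.5)**2 + (0.7 - 0.5)**2)) < 1e-12
assert abs(gkl(c, cp) - (0.6 * math.log(0.6 / 0.5) + 0.7 * math.log(0.7 / 0.5)
                         - (0.6 + 0.7) + (0.5 + 0.5))) < 1e-12
```

<br /><br />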
Given such a measure $D$, we might be tempted to say that, if Adila and/or Benoit are incoherent and our linear pool of their credences is incoherent, we should <i>not</i> adopt that linear pool as our credence function, since it violates Probabilism, but rather we should find the nearest coherent credence function to the incoherent linear pool, relative to $D$, and adopt that. That is, we should adopt credence function $c$ such that $D(c, \alpha c_A + (1-\alpha)c_B)$ is minimal. So, we should first take the linear pool of Adila's and Benoit's credences; and then we should make them coherent.<br /><br />But this raises the question: why not first make Adila's and Benoit's credences coherent, and then take the linear pool of the resulting credence functions? Do these two procedures give the same result? That is, in the jargon of algebra, does linear pooling commute with our procedure for making incoherent credences coherent? Does linear pooling commute with coherentization? If so, there is no problem. But if not, our judgment aggregation method faces a dilemma: in which order should the procedures be performed: aggregate, then make coherent; or make coherent, then aggregate?<br /><br />It turns out that whether or not the two commute depends on the distance measure in question. First, suppose we use the so-called <i>squared Euclidean distance </i>measure. That is, for two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$SED(c, c') = \sum^n_{i=1} (c(X_i) - c'(X_i))^2$$ In particular, if $c$, $c'$ are defined on $X$, $\overline{X}$, then the distance from $c$ to $c'$ is $$(c(X) -c'(X))^2 + (c(\overline{X})-c'(\overline{X}))^2$$ And note that this generates the <i>quadratic scoring rule</i>, which is strictly proper:<br /><ul><li>$\mathfrak{q}(1, x) = (1-x)^2$</li><li>$\mathfrak{q}(0, x) = x^2$ </li></ul>Then, in this case, linear pooling commutes with our procedure for making incoherent credences coherent. 
Given a credence function $c$, let $c^*$ be the closest coherent credence function to $c$ relative to $SED$. Then:<br /><br /><b>Theorem 1 </b>For all $\alpha$, $c_A$, $c_B$, $$\alpha c^*_A + (1-\alpha)c^*_B = (\alpha c_A + (1-\alpha)c_B)^*$$<br /><br />Second, suppose we use the <i>generalized Kullback-Leibler divergence</i> to measure the distance between credence functions. That is, for two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$GKL(c, c') = \sum^n_{i=1} c(X_i) \mathrm{log}\frac{c(X_i)}{c'(X_i)} - \sum^n_{i=1} c(X_i) + \sum^n_{i=1} c'(X_i)$$ Thus, for $c$, $c'$ defined on $X$, $\overline{X}$, the distance from $c$ to $c'$ is $$c(X)\mathrm{log}\frac{c(X)}{c'(X)} + c(\overline{X})\mathrm{log}\frac{c(\overline{X})}{c'(\overline{X})} - c(X) - c(\overline{X}) + c'(X) + c'(\overline{X})$$ And note that this generates the following scoring rule, which is strictly proper:<br /><ul><li>$\mathfrak{b}(1, x) = \mathrm{log}(\frac{1}{x}) - 1 + x$</li><li>$\mathfrak{b}(0, x) = x$ </li></ul>Then, in this case, linear pooling <i>does not </i>commute with our procedure for making incoherent credences coherent. Given a credence function $c$, let $c^+$ be the closest coherent credence function to $c$ relative to $GKL$. Then:<br /><br /><b>Theorem 2</b> For many $\alpha$, $c_A$, $c_B$, $$\alpha c^+_A + (1-\alpha)c^+_B \neq (\alpha c_A + (1-\alpha)c_B)^+$$<br /><br /><i>Proofs of Theorems 1 and 2</i>. With the following two key facts in hand, the results are straightforward. 
If $c$ is defined on $X$, $\overline{X}$:<br /><ul><li>$c^*(X) = \frac{1}{2} + \frac{c(X)-c(\overline{X})}{2}$, $c^*(\overline{X}) = \frac{1}{2} - \frac{c(X) - c(\overline{X})}{2}$.</li><li>$c^+(X) = \frac{c(X)}{c(X) + c(\overline{X})}$, $c^+(\overline{X}) = \frac{c(\overline{X})}{c(X) + c(\overline{X})}$.</li></ul><br />Thus, Theorem 1 tells us that, if you measure distance using SED, then no dilemma arises: you can aggregate and then make coherent, or you can make coherent and then aggregate -- they will have the same outcome. However, Theorem 2 tells us that, if you measure distance using GKL, then a dilemma does arise: aggregating and then making coherent gives a different outcome from making coherent and then aggregating.<br /><br />Perhaps this is an argument against GKL and in favour of SED? You might think, of course, that the problem arises here only because SED is somehow naturally paired with linear pooling, while GKL might be naturally paired with some other method of aggregation such that that method of aggregation commutes with coherentization relative to GKL. That may be so. But bear in mind that there is a <a href="https://dl.dropboxusercontent.com/u/9797023/Papers/linear-pooling.pdf" target="_blank">very general argument</a> in favour of linear pooling that applies whichever distance measure you use: it says that if you do not aggregate a set of probabilistic credence functions using linear pooling then there is some linear pool that each of those credence functions expects to be more accurate than your aggregation. 
So I think this response won't work.Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com2tag:blogger.com,1999:blog-4987609114415205593.post-85981736128637262172017-03-01T11:54:00.000+00:002017-03-02T10:04:46.144+00:00More on the Swamping Problem for ReliabilismIn a <a href="http://m-phi.blogspot.co.uk/2017/02/the-swamping-problem-for-reliabilism.html" target="_blank">previous post</a>, I floated the possibility that we might use recent work in decision theory by Orri Stefánsson and Richard Bradley to solve the so-called Swamping Problem for veritism. In this post, I'll show that, in fact, this putative solution can't work.<br /><br />According to the Swamping Problem, I value beliefs that are both justified and true more than I value beliefs that are true but unjustified; and, we might suppose, I value beliefs that are justified but false more than I value beliefs that are both unjustified and false. In other words, I care about the truth or falsity of my beliefs; but I also care about their justification. Now, suppose we take the view, which I defend in this earlier post, that a belief in a proposition is more justified the higher the objective probability of that proposition given the grounds for that belief. Thus, for instance, if I base my belief that there was a firecrest in front of me until a few seconds ago on the fact that I saw a flash of orange as the bird flew off, then my belief is more justified the higher the objective probability that it was a firecrest given that I saw a flash of orange. And, whether or not there really was a firecrest in front of me, the value of my belief increases as the objective probability that there was given I saw a flash of orange increases.<br /><br />Let's translate this into Stefánsson and Bradley's version of Richard Jeffrey's decision theory. 
Here are the components:<br /><ul><li>a Boolean algebra $F$</li><li>a desirability function $V$, defined on $F$</li><li>a credence function $c$, defined on $F$</li></ul>The fundamental assumption of Jeffrey's framework is this:<br /><br /><b>Desirability</b> For any partition $X_1$, ..., $X_n$, $$V(X) = \sum^n_{i=1} c(X_i | X)V(X\ \&\ X_i)$$ And, further, we assume Lewis' Principal Principle, where $C^x_X$ is the proposition that says that $X$ has objective probability $x$:<br /><br /><b>Principal Principle</b> $$c(X_j | \bigwedge^n_{i=1} C^{x_i}_{X_i}) = x_j$$ Now, suppose I believe proposition $X$. Then, from what we said above, we can extract the following:<br /><ol><li>$V(X\ \&\ C^x_X)$ is a monotone increasing and non-constant function of $x$, for $0 \leq x \leq 1$</li><li>$V(\overline{X}\ \&\ C^x_X)$ is a monotone increasing and non-constant function of $x$, for $0 \leq x \leq 1$</li><li>$V(X\ \&\ C^x_X) > V(\overline{X}\ \&\ C^x_X)$, for $0 \leq x \leq 1$.</li></ol>Given this, the Swamping Problem usually proceeds by identifying a problem with (1) and (2) as follows. It begins by claiming that the principle that Stefánsson and Bradley, in another context, call Chance Neutrality is indeed a requirement of rationality:<br /><br /><b>Chance Neutrality</b> $$V(X_j\ \&\ \bigwedge^n_{i=1} C^{x_i}_{X_i}) = V(X_j)$$ Or, equivalently:<br /><br /><b>Chance Neutrality$^*$</b> $$V(X_j\ \&\ \bigwedge^n_{i=1} C^{x_i}_{X_i}) = V(X_j\ \&\ \bigwedge^n_{i=1} C^{x'_i}_{X_i})$$ This says that the truth of $X$ swamps the chance of $X$ in determining the value of an outcome. With the truth of $X$ fixed, its chance of being true becomes irrelevant.<br /><br />The Swamping Problem then continues by noting that, if (1) or (2) is true, then my desirability function violates Chance Neutrality. Therefore, it concludes, I am irrational.<br /><br />However, as Stefánsson and Bradley show, Chance Neutrality is not a requirement of rationality. 
To do this, they consider a further putative principle, which they call Linearity:<br /><br /><b>Linearity</b> $$V(\bigwedge^n_{i=1} C^{x_i}_{X_i}) = \sum^n_{i=1} x_iV(X_i)$$ Now, Stefánsson and Bradley show<br /><br /><b>Theorem</b> <i>Suppose Desirability and the Principal Principle. Then Chance Neutrality entails Linearity.</i><br /><br />They then argue that, since Linearity is not a rational requirement, neither can Chance Neutrality be -- since the Principal Principle is a rational requirement, if Chance Neutrality were too, then Linearity would be; and Linearity is not because it is violated in cases of rational preference, such as in the Allais paradox.<br /><br />Thus, the Swamping Problem in its original form fails. It relies on Chance Neutrality, but Chance Neutrality is not a requirement of rationality. Of course, if we could prove a sort of converse of Stefánsson and Bradley's result, and show that, in the presence of the Principal Principle, Linearity entails Chance Neutrality, then we could show that a value function satisfying (1) is irrational. But we can't prove that converse.<br /><br />Nonetheless, there is still a problem. For we can show that, in the presence of Desirability and the Principal Principle, Linearity entails that there is no desirability function $V$ that satisfies (1). Of course, given that Linearity is not a requirement of rationality, this does not tell us very much at the moment. But it does when we realise that, while Linearity is not required by rationality, veritists who accept the reliabilist account of justification given above typically do have a desirability function that satisfies Linearity. After all, they value a justified belief because it is reliable -- that is, it has high objective expected epistemic value. That is, they value a belief at its expected epistemic value, which is precisely what Linearity says.<br /><br /><b>Theorem</b> <i>Suppose $X$ is a proposition in $F$. 
And suppose $V$ satisfies Desirability, Principal Principle, and Linearity. Then it is not possible that the following are all satisfied:</i><br /><ul><li><i>(Monotonicity) $V(X\ \&\ C^x_X)$ and $V(\overline{X}\ \&\ C^x_X)$ are both monotone increasing and non-constant functions of $x$ on $(0, 1)$;</i></li><li><i>(Betweenness) There is $0 < x < 1$ such that $V(X) < V(X\ \&\ C^x_X)$</i>.</li></ul><br /><i>Proof</i>. We suppose Desirability, Principal Principle, and Linearity throughout. We proceed by reductio. We make the following abbreviations:<br /><ul><li>$f(x) = V(X\ \&\ C^x_X)$</li><li>$g(x) = V(\overline{X}\ \&\ C^x_X)$</li><li>$F = V(X)$</li><li>$G = V(\overline{X})$</li></ul>By assumption, we have:<br /><ul><li>(1f) $f$ is a monotone increasing and non-constant function on $(0, 1)$ (by Monotonicity);</li><li>(1g) $g$ is a monotone increasing and non-constant function on $(0, 1)$ (by Monotonicity);</li><li>(2) There is $0 < x < 1$ such that $F < f(x)$ (by Betweenness).</li></ul>By Desirability, we have $$V(C^x_X) = c(X | C^x_X)V(X\ \&\ C^x_X) + c(\overline{X} | C^x_X) V(\overline{X}\ \&\ C^x_X)$$ By this and the Principal Principle, we have $$V(C^x_X)= x V(X\ \&\ C^x_X) + (1 - x)V(\overline{X}\ \&\ C^x_X)$$ So $V(C^x_X) = xf(x) + (1-x)g(x)$. By Linearity, we have $$V(C^x_X) = x V(X) + (1-x)V(\overline{X})$$ So $V(C^x_X) = xF + (1-x)G$. Thus, for all $0 \leq x \leq 1$, $$x V(X) + (1-x)V(\overline{X}) = x V(X\ \&\ C^x_X) + (1 - x)V(\overline{X}\ \&\ C^x_X)$$ That is,<br /><ul><li>(3) $xF + (1-x)G = xf(x) + (1-x)g(x)$</li></ul>Now, by (3), we have $$g(x) = \frac{x}{1-x}(F - f(x)) + G$$ for $0 \leq x < 1$. Now, by (1f) and (2), there are $0 < x < y < 1$ such that $F < f(x) \leq f(y)$. Thus, $F - f(y) \leq F - f(x) < 0$. And since $0 < \frac{x}{1-x} < \frac{y}{1-y}$, it follows that $$\frac{y}{1-y}(F-f(y)) + G < \frac{x}{1-x}(F-f(x)) + G$$ And thus $g(y) < g(x)$. But this contradicts (1g). Thus, there can be no such pair of functions $f$, $g$. Thus, there can be no such $V$, as required. 
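As a sanity check, the squeeze in the proof is easy to see numerically: pick any monotone increasing $f$ with $F < f(x)$ on $(0,1)$, define $g$ from equation (3), and $g$ is forced to decrease. A minimal sketch in Python (the particular choices of $f$, $F$ and $G$ are illustrative, not forced by the proof):

```python
# Define g from equation (3): xF + (1-x)G = x f(x) + (1-x) g(x).
# With f monotone increasing and F < f(x) on (0, 1), g must decrease,
# contradicting Monotonicity. F, G and f are arbitrary choices.

F, G = 1.0, 0.0           # F = V(X), G = V(not-X)

def f(x):                 # monotone increasing, with F < f(x) on (0, 1)
    return 2.0 + x

def g(x):                 # solved from equation (3), for 0 <= x < 1
    return (x / (1 - x)) * (F - f(x)) + G

xs = [0.1, 0.3, 0.5, 0.7, 0.9]
gs = [g(x) for x in xs]
assert all(a > b for a, b in zip(gs, gs[1:]))  # g strictly decreases
```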
$\Box$<br /><br /><br /><br /><br />Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-37519951788502376842017-02-12T19:14:00.003+00:002017-02-13T09:27:54.652+00:00Chance Neutrality and the Swamping Problem for ReliabilismReliabilism about justified belief comes in two varieties: process reliabilism and indicator reliabilism. According to process reliabilism, a belief is justified if it is formed by a process that is likely to produce truths; according to indicator reliabilism, a belief is justified if it is likely to be true given the ground on which the belief is based. Both are natural accounts of justification for a veritist, who holds that the sole fundamental source of epistemic value for a belief is its truth.<br /><br />Against veritists who are reliabilists, opponents raise the Swamping Problem. This begins with the observation that we prefer a justified true belief to an unjustified true belief; we ascribe greater value to the former than to the latter; we would prefer to have the former over the latter. But, if reliabilism is true, this means that we prefer a belief that is true and had a high chance of being true over a belief that is true and had a low chance of being true. For a veritist, this means that we prefer a belief that has maximal epistemic value and had a high chance of having maximal epistemic value over a belief that has maximal epistemic value and had a low chance of having maximal epistemic value. And this is irrational, or so the objection goes. It is only rational to value a high chance of maximal utility when the actual utility is not known; once the actual utility is known, this 'swamps' any consideration of the chance of that utility. 
For instance, suppose I find a lottery ticket on the street; I know that it comes either from a 10-ticket lottery or from a 100-ticket lottery; both lotteries pay out the same amount to the holder of the winning ticket; and I know the outcome of neither lottery. Then it is rational for me to hope that the ticket I hold belongs to the smaller lottery, since that would maximise my chance of winning and thus maximise the expected utility of the ticket. But once I know that the lottery ticket I found is the winning ticket, it is irrational to prefer that it came from the smaller lottery --- my knowledge that it's the winner 'swamps' the information about how likely it was to be the winner. This is known variously as the Swamping Problem or the Value Problem for reliabilism about justification (Zagzebski 2003, Kvanvig 2003).<br /><br />The central assumption of the Swamping Problem is a principle that, in a different context, H. Orri Stefánsson and Richard Bradley call Chance Neutrality (Stefánsson & Bradley 2015). They state it precisely within the framework of Richard Jeffrey's decision theory (Jeffrey 1983). In that framework, we have a desirability function $V$ and a credence function $c$, both of which are defined on an algebra of propositions $\mathcal{F}$. $V(A)$ measures how strongly our agent desires $A$, or how greatly she values it. $c(A)$ measures how strongly she believes $A$, or her credence in $A$. The central principle of the decision theory is this:<br /><br /><b>Desirability</b> If the propositions $A_1$, $\ldots$, $A_n$ form a partition of the proposition $X$, then $$V(X) = \sum^n_{i=1} c(A_i | X) V(A_i)$$<br /><br />Now, suppose the algebra on which $V$ and $c$ are defined includes some propositions that concern the objective probabilities of other propositions in the algebra. Then:<br /><br /><b>Chance Neutrality </b> Suppose $X$ is in the partition $X_1$, $\ldots$, $X_n$. 
And suppose $0 \leq \alpha_1, \ldots, \alpha_n \leq 1$ and $\sum^n_{i=1} \alpha_i = 1$. Then $$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = V(X)$$<br /><br />That is, information about the outcome of the chance process that picks between $X_1$, $\ldots$, $X_n$ 'swamps' information about the chance process in our evaluation, which is recorded in $V$. A simple consequence of this: if $0 \leq \alpha_1, \alpha'_1, \ldots, \alpha_n, \alpha'_n \leq 1$ and $\sum^n_{i=1} \alpha_i = 1$ and $\sum^n_{i=1} \alpha'_i = 1$, then<br /><br />$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = $<br />$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha'_i$})$<br /><br />Now consider the particular case of this that is used in the Swamping Problem. I believe $X$ on the basis of ground $g$. I assign greater value to $X$ being true and justified than I do to $X$ being true and unjustified. That is, given the reliabilist's account of justification, if $\alpha$ is a probability that lies above the threshold for justification and $\alpha'$ is a probability that lies below that threshold --- for the veritist, $\alpha' < \frac{W}{R+W} < \alpha$ --- then<br /><br />$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha'$}) <$<br />$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$<br /><br />And of course this violates Chance Neutrality. <br /><br />Thus, the Swamping Problem stands or falls with the status of Chance Neutrality. Is it a requirement of rationality? Stefánsson and Bradley argue that it is not (Section 3, Stefánsson & Bradley 2015). They show that, in the presence of the Principal Principle, Chance Neutrality entails a principle called Linearity; and they claim that Linearity is not a requirement of rationality. If it is permissible to violate Linearity, then it cannot be a requirement to satisfy a principle that entails it. 
So Chance Neutrality is not a requirement of rationality.<br /><br />In this context, the Principal Principle runs as follows:<br /><br /><b>Principal Principle</b> $$c(X_j | \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = \alpha_j$$<br /><br />That is, an agent's credence in $X_j$, conditional on information that gives the objective probability of $X_j$ and other members of a partition to which it belongs, should be equal to the objective probability of $X_j$. And Linearity is the following principle:<br /><br /><b>Linearity</b> $$V(\bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = \sum^n_{i=1} \alpha_iV(X_i)$$<br /><br />That is, an agent should value a lottery at the expected value of its outcome. Now, as is well known, real agents often violate Linearity (Buchak 2013). The most famous violations are known as the Allais preferences (Allais 1953). Suppose there are 100 tickets numbered 1 to 100. One ticket will be drawn and you will be given a prize depending on which option you have chosen from $L_1$, $\ldots$, $L_4$:<br /><ul><li>$L_1$: if ticket 1-89, £1m; if ticket 90-99, £1m; if ticket 100, £1m.</li><li>$L_2$: if ticket 1-89, £1m; if ticket 90-99, £5m; if ticket 100, £0m</li><li>$L_3$: if ticket 1-89, £0m; if ticket 90-99, £1m; if ticket 100, £1m</li><li>$L_4$: if ticket 1-89, £0m; if ticket 90-99, £5m; if ticket 100, £0m </li></ul>I know that each ticket has an equal chance of winning --- thus, by the Principal Principle, $c(\mbox{Ticket $n$ wins}) = \frac{1}{100}$. Now, it turns out that many people have preferences recorded in the following desirability function $V$: $$V(L_1) > V(L_2) \mbox{ and } V(L_3) < V(L_4)$$<br /><br />When there is an option that guarantees them a high payout (£1m), they prefer that over something with a 1% chance of nothing (£0) even if it also provides a 10% chance of a much greater payout (£5m). 
On the other hand, when there is no guarantee of a high payout, they prefer the chance of the much greater payout (£5m), even if there is also a slightly greater chance of nothing (£0). The problem is that there is no way to assign values to $V(£0)$, $V(£1m)$, and $V(£5m)$ so that $V$ satisfies Linearity and also these inequalities. Suppose, for a reductio, that there is. By Linearity,<br />$$V(L_1) = 0.89V(£1\mathrm{m}) + 0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m})$$<br />$$V(L_2) = 0.89V(£1\mathrm{m}) + 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m}) $$<br />Then, since $V(L_1) > V(L_2)$, we have: $$0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m}) > 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$ But also by Linearity, $$V(L_3) = 0.89V(£0\mathrm{m}) + 0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m})$$<br />$$V(L_4) = 0.89V(£0\mathrm{m}) + 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$<br />Then, since $V(L_3) < V(L_4)$, we have: $$0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m}) < 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$<br />And this gives a contradiction. In general, an agent violates Linearity when she has any risk averse or risk seeking preferences.<br /><br />Stefánsson and Bradley show that, in the presence of the Principal Principle, Chance Neutrality entails Linearity; and they argue that there are rational violations of Linearity (such as the Allais preferences); so they conclude that there are rational violations of Chance Neutrality. So far, so good for the reliabilist: the Swamping Problem assumes that Chance Neutrality is a requirement of rationality; and we have seen that it is not. However, reliabilism is not out of the woods yet. After all, the veritist's version of reliabilism in fact assumes Linearity! They say that a belief is justified if it is likely to be true. And they say this because a belief that is likely to be true has high expected epistemic value on the veritist's account of epistemic value. 
And so they connect justification to epistemic value by taking the value of a belief to be its expected epistemic value --- that is, they assume Linearity. Thus, if the only rational violations of Chance Neutrality are also rational violations of Linearity, then the Swamping Problem is revived. In particular, if Linearity entails Chance Neutrality, then reliabilism cannot solve the Swamping Problem.<br /><br />Fortunately, even in the presence of the Principal Principle, Linearity does not entail Chance Neutrality. Together, the Principal Principle and Desirability entail:<br /><br />$V(\mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) =$<br /><br />$\alpha V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) + $<br /><br />$(1-\alpha) V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$<br /><br />And Linearity entails:<br /><br /> $V(\mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) = \alpha V(X) + (1-\alpha) V(\overline{X})$<br /><br />So<br />$\alpha V(X) + (1-\alpha) V(\overline{X}) =$<br /><br />$\alpha V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) + $<br /><br />$(1-\alpha) V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$<br /><br />And, whatever the values of $V(X)$ and $V(\overline{X})$, there are values of $$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$ and $$V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$<br />such that the above equation holds. Thus, it is at least possible to adhere to Linearity, yet violate Chance Neutrality. Of course, this does not show that the agent who adheres to Linearity but violates Chance Neutrality is rational. 
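Both pieces of arithmetic in the last two paragraphs can be checked mechanically. The sketch below (the lotteries are from the Allais example above; the particular desirability values, and the helper `expected_value`, are illustrative) first confirms that, under Linearity, $V(L_1) - V(L_2)$ and $V(L_3) - V(L_4)$ are the very same quantity, so the Allais preferences are jointly unsatisfiable; it then exhibits the free parameter that lets Linearity hold while Chance Neutrality fails:

```python
def expected_value(lottery, v):
    """Linearity: value a lottery at the expectation of its prizes (in £m)."""
    return sum(chance * v[prize] for prize, chance in lottery.items())

# the Allais lotteries: prize (£m) mapped to its chance
L1 = {1: 1.00}
L2 = {1: 0.89, 5: 0.10, 0: 0.01}
L3 = {0: 0.89, 1: 0.11}
L4 = {0: 0.90, 5: 0.10}

v = {0: 0.0, 1: 10.0, 5: 30.0}   # illustrative; any assignment works

d12 = expected_value(L1, v) - expected_value(L2, v)
d34 = expected_value(L3, v) - expected_value(L4, v)
assert abs(d12 - d34) < 1e-9     # so V(L1) > V(L2) and V(L3) < V(L4)
                                 # cannot both hold under Linearity

# Linearity without Chance Neutrality: the displayed equation fixes only
# the alpha-weighted average of the two conjunctive values.
VX, VnotX, alpha = 10.0, 0.0, 0.8
target = alpha * VX + (1 - alpha) * VnotX
a = VX + 3.0                            # V(X & chance) != V(X)
b = (target - alpha * a) / (1 - alpha)  # compensate on the not-X side
assert abs(alpha * a + (1 - alpha) * b - target) < 1e-9  # Linearity intact
assert a != VX                          # Chance Neutrality violated
```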
But, now that the intuitive appeal of Chance Neutrality is undermined, the burden is on those who raise the Swamping Problem to explain why such cases are irrational.<br /><br /><h2>References</h2><br /><ul><li>Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: critique des postulats et axiomes de l'école Américaine. Econometrica, 21(4), 503–546.</li><li>Buchak, L. (2013). Risk and Rationality. Oxford: Oxford University Press.</li><li>Jeffrey, R. (1983). The Logic of Decision (2nd ed.). Chicago: University of Chicago Press.</li><li>Kvanvig, J. (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.</li><li>Stefánsson, H. O., & Bradley, R. (2015). How Valuable Are Chances? Philosophy of Science, 82, 602–625.</li><li>Zagzebski, L. (2003). The search for the source of the epistemic good. Metaphilosophy, 34(1–2), 12–28.</li></ul><br />Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-6713211078660404562017-02-06T23:50:00.005+00:002017-02-08T08:55:29.400+00:00What is justified credence?Aafira and Halim are both 90% confident that it will be sunny tomorrow. Aafira bases her credence on her observation of the weather today and her past experience of the weather on days that follow days like today -- around nine out of ten of them have been sunny. Halim bases his credence on wishful thinking -- he's arranged a garden party for tomorrow and he desperately wants the weather to be pleasant. Aafira, it seems, is justified in her credence, while Halim is not. Just as one of your full or categorical beliefs might be justified if it is based on visual perception under good conditions, or on memories of recent important events, or on testimony from experts, so might one of your credences be; and just as one of your full beliefs might be unjustified if it is based on wishful thinking, or biased stereotypical associations, or testimony from ideologically driven news outlets, so might your credences be. 
In this post, I'm looking for an account of justified credence -- in particular, I seek necessary and sufficient conditions for a credence to be justified. Our account will be reliabilist. <br /><br />Reliabilism about justified beliefs comes in two varieties: process reliabilism and indicator reliabilism. Roughly, process reliabilism says that a belief is justified if it is formed by a reliable process, while indicator reliabilism says that a belief is justified if it is based on a ground that renders it likely. Reliabilism about justified credence also comes in two varieties; indeed, it comes in the same two varieties. And, indeed, of the two existing proposals, <a href="http://academic.depauw.edu/jeffreydunn_web/" target="_blank">Jeff Dunn</a>'s is a version of process reliabilism (<a href="http://link.springer.com/article/10.1007%2Fs11098-014-0380-2" target="_blank">paper</a>) while <a href="http://profile.nus.edu.sg/fass/phitwh/" target="_blank">Weng Hong Tang</a> offers a version of indicator reliabilism (<a href="https://academic.oup.com/mind/article-abstract/125/497/63/2563643/Reliability-Theories-of-Justified-Credence" target="_blank">paper</a>). As we will see, both face the same objection. If they are right about what justification is, it is mysterious why we care about justification, for neither of the accounts connects justification to a source of epistemic value. We will call this the <i>Connection Problem</i>.<br /><br />I begin by describing Dunn's process reliabilism and Tang's indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is both process and indicator, and I argue that it solves that problem. 
Furthermore, I show that it is also extensionally equivalent to Dunn's reliabilism and Tang's.<br /><br /><h2>Reliabilism and Dunn on reliable credence</h2><br />Let us begin with Dunn's process reliabilism for justified credences. Now, to be clear, Dunn takes himself only to be providing an account of reliability for credence-forming processes. He doesn't necessarily endorse the other two conjuncts of reliabilism, which say that a credence is justified if it is reliable, and that a credence is reliable if formed by a reliable process. Instead, Dunn speculates that perhaps being reliably formed is but one of the epistemic virtues, and he wonders whether all of the epistemic virtues are required for justification. Nonetheless, I will consider a version of reliabilism for justified credences that is based on Dunn's account of reliable credence. For reasons that will become clear, I will call this the calibrationist version of process reliabilism for justified credence. Dunn rejects it based on what I will call below the <i>Graining Problem</i>. As we will see, I think we can answer that objection.<br /><br />For Dunn, a credence-forming process is perfectly reliable if it is well calibrated. Here's what it means for a process $\rho$ to be well calibrated:<br /><ul><li>First, we construct a set of all and only the outputs of the process $\rho$ in the actual world and in nearby counterfactual scenarios. An output of $\rho$ consists of a credence $x$ in a proposition $X$ at a particular time $t$ in a particular possible world $w$ -- so we represent it by the tuple $(x, X, w, t)$. If $w$ is a nearby world and $t$ a nearby time, we call $(x, X, w, t)$ a <i>nearby output</i>. 
Let $O_\rho$ be the set of nearby outputs -- that is, the set of tuples $(x, X, w, t)$, where $w$ is a nearby world, $t$ is a nearby time, and $\rho$ assigns credence $x$ to proposition $X$ in world $w$ at time $t$.</li><li>Second, we say that the truth-ratio of $\rho$ for credence $x$ is the proportion of nearby outputs $(x, X, w, t)$ in $O_\rho$ such that $X$ is true at $w$ and $t$.</li><li>Finally, we say that $\rho$ is well calibrated (or nearly so) if, for each credence $x$ that $\rho$ assigns, $x$ is equal to (or approximately equal to) the truth-ratio of $\rho$ for $x$.</li></ul>For instance, suppose a process only ever assigns credence 0.6 or 0.7. And suppose that 60% of the time that it assigns 0.6 in the actual world or a nearby world it assigns it to a proposition that is true; and 70% of the time it assigns 0.7 it assigns it to a true proposition. If, on the other hand, 59% of the time that it assigns 0.6 in the actual world or a nearby world it assigns it to a proposition that is true, while 71% of the time it assigns 0.7 it assigns it to a true proposition, then that process is not well calibrated, but it is nearly well calibrated. But if 23% of the time that it assigns 0.6 in the actual world or a nearby world it assigns it to a proposition that is true, while 95% of the time it assigns 0.7 it assigns it to a true proposition, then that process is not even nearly well calibrated.<br /><br />This, then, is Dunn's calibrationist account of the reliability of a credence-forming process. Any version of reliabilism about justified credences that is based on it requires two further ingredients. First, we must use the account to say when an individual credence is reliable; second, we must add the claim that a credence is justified iff it is reliable. Both of these moves create problems. We will address them below. But first it will be useful to present Tang's version of indicator reliabilism for justified credence. 
It will provide an important clue that helps us solve one of the problems that Dunn's account faces. And, having it in hand, it will be easier to see how these two accounts end up coinciding.<br /><br /><h2>Tang's indicator reliabilism for justified credence</h2><br />According to indicator reliabilism for justified belief, a belief is justified if the ground on which it is based is a good indicator of the truth of that belief. Thus, beliefs formed on the basis of visual experiences tend to be justified because the fact that the agent had the visual experience in question makes it likely that the belief they based on it is true. Wishful thinking, on the other hand, usually does not give rise to justified belief because the fact that an agent hopes that a particular proposition will be true -- which in this case is the ground of their belief -- does not make it likely that the proposition is true.<br /><br />Tang seeks to extend this account of justified belief to the case of credence. Here is his first attempt at an account:<br /><br /><b>Tang's Indicator Reliabilism for Justified Credence (first pass)</b> A credence of $x$ in $X$ by an agent $S$ is justified iff<br />(TIC1-$\alpha$) $S$ has ground $g$;<br />(TIC2-$\alpha$) the credence $x$ in $X$ by $S$ is based on ground $g$;<br />(TIC3-$\alpha$) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- we write this $P(X | \mbox{$S$ has $g$}) \approx x$.<br /><br />Thus, just as an agent's full belief in a proposition is justified if its ground makes the objective probability of that proposition close to 1, a credence $x$ in a proposition is justified if its ground makes the objective probability of that proposition close to $x$. There is a substantial problem here in identifying exactly to which notion of objective probability Tang wishes to appeal. 
But we will leave that aside for the moment, other than to say that he conceives of it along the lines of hypothetical frequentism -- that is, the objective probability of $X$ given $Y$ is the hypothetical frequency with which propositions like $X$ are true when propositions like $Y$ are true. <br /><br />However, as Tang notes, as stated, his version of indicator reliabilism faces a problem. Suppose I am presented with an empty urn. I watch as it is filled with 100 balls, numbered 1 to 100, half of which are white, and half of which are black. I shake the urn vigorously and extract a ball. It's number 73 and it's white. I look at its colour and the numeral printed on it. I have a visual experience of a white ball with '73' on it. On the basis of my visual experience of the numeral alone, I assign credence 0.5 to the proposition that ball 73 is white. According to Tang's first version of indicator reliabilism for justified credence, my credence is justified. My ground is the visual experience of the number on the ball; I have that ground; I base my credence on that ground; and the objective probability that ball 73 is white given that I have a visual experience of the numeral '73' printed on it is 50% -- after all, half the balls are white. Of course, the problem is that I have not used my total evidence -- or, in the language of grounds, I have not based my credence on my most inclusive ground. I had the visual experience of the numeral on the ball as a ground; but I also had the visual experience of the numeral on the ball <i>and the colour of the ball</i> as a ground. The resulting credence is unjustified because the objective probability that ball 73 is white given I have the more inclusive ground is not 0.5 -- it is close to 1, since my visual system is so reliable. 
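The urn case can be put in miniature as follows. The probabilities are stipulated, not computed from any formal model: 0.5 comes from the composition of the urn, and 0.99 stands in for 'close to 1, since my visual system is so reliable'; the helper `approx` and its tolerance are illustrative choices.

```python
# The first-pass account checks only that the credence matches the
# probability on *some* ground the agent has; the urn case shows why
# the most inclusive ground matters.

def approx(a, b, tolerance=0.05):
    return abs(a - b) <= tolerance

p_given_numeral = 0.5              # half of the 100 balls are white
p_given_numeral_and_colour = 0.99  # perception of colour is reliable

credence = 0.5
# TIC3 is satisfied relative to the numeral-only ground ...
assert approx(credence, p_given_numeral)
# ... but the more inclusive ground gives a very different probability,
# so the credence should not count as justified.
assert not approx(credence, p_given_numeral_and_colour)
```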
This leads Tang to amend his account of justified credence as follows:<br /><br /><b>Tang's Indicator Reliabilism for Justified Credence</b> A credence of $x$ in $X$ by an agent $S$ is justified iff<br />(TIC1) $S$ has ground $g$;<br />(TIC2) the credence $x$ in $X$ by $S$ is based on ground $g$;<br />(TIC3) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$;<br />(TIC4) there is no more inclusive ground $g'$ such that (i) $S$ has $g'$ and (ii) the objective probability of $X$ given that the agent has ground $g'$ does not equal or approximate $x$ -- that is, $P(X | \mbox{$S$ has $g'$}) \not \approx x$.<br /><br />This, then, is Tang's version of indicator reliabilism for justified credences.<br /><br /><h2>Same mountain, different routes</h2><br />Thus, we have now seen Dunn's process reliabilism and Tang's indicator reliabilism for justified credences. Is either correct? If so, which? In one sense, both are correct; in another, neither is. Less mysteriously: as we will see in this section, Dunn's process reliablism and Tang's indicator reliabilism are extensionally equivalent -- that is, the same credences are justified on both. What's more, as we will see in the final section, both are extensionally equivalent to the correct account of justified credence, which is thus a version of both process and indicator reliabilism. However, while they get the extension right, they do so for the wrong reasons. A justified credence is not justified because it is formed by a well calibrated process; and it is not justified because it matches the objective chance given its grounds. Thus, Dunn and Tang delimit the correct extension, but they use the wrong intension. In the final section of this post, I will offer what I take to be the correct intension. 
But first, let's see why it is that the routes that Dunn and Tang take lead them both to the top of the same mountain.<br /><br />We begin with Dunn's calibrationist account of the reliability of a credence-forming process. As we noted above, any version of reliabilism about justified credences that is based on this account requires two further ingredients. First, we must use the calibrationist account of reliable credence-forming processes to say when an individual credence is reliable. The natural answer: when it is formed by a reliable credence-forming process. But then we must be able to identify, for a given credence, the process of which it is an output. The problem is that, for any credence, there are a great many processes of which it might be the output. I have a visual experience of a piece of red cloth on my desk, and I form a high credence that there is a piece of red cloth on my desk. Is this credence the output of a process that assigns a high credence that there is a piece of red cloth on my desk whenever I have that visual experience? Or is it the output of a process that assigns a high credence that there is a piece of red cloth on my desk whenever I have that visual experience <i>and the lighting conditions in my office are good</i>, while it assigns a middling credence that there is a piece of red cloth on my desk whenever I have that visual experience <i>and the lighting conditions in my office are bad</i>? It is easy to see that this is important. The first process is poorly calibrated, and thus unreliable on Dunn's account; the second process is better calibrated and thus more reliable on Dunn's account. This is the so-called <i>Generality Problem</i>, and it is a challenge that faces any version of reliabilism. 
I will offer a version of Juan Comesaña's solution to this problem below -- as we will see, that solution also clears the way for a natural solution to the Graining Problem, which we consider next.<br /><br />Dunn provides an account of when a credence-forming process is reliable. And, once we have a solution to the Generality Problem, we can use that to say when a credence is reliable -- it is reliable when formed by a reliable credence-forming process. Finally, to complete the version of process reliabilism about justified credence that we are basing on Dunn's account, we just need the claim that a credence is justified iff it is reliable. But this too faces a problem, which we call the <i>Graining Problem</i>. As we did above, suppose I am presented with an empty urn. I watch as it is filled with 100 balls, numbered 1 to 100, half of which are white, and half of which are black. I shake the urn vigorously and extract a ball. I look at its colour and the numeral printed on it. I have two processes at my disposal. Process 1 takes my visual experience of the numeral only, say '$n$', and assigns the credence 0.5 to the proposition that ball $n$ is white. Process 2 takes my visual experience of the numeral, '$n$', <i>and my visual experience of the colour of the ball</i>, and assigns credence 1 to the proposition that ball $n$ is white if my visual experience is of a white ball, and assigns credence 1 to the proposition that ball $n$ is black if my visual experience is of a black ball. Note that both processes are well calibrated (or nearly so, if we allow that my visual system is very slightly fallible). But we would usually judge the credence formed by the second to be better justified than the credence formed by the first. Indeed, we would typically say that a Process 1 credence is unjustified, while a Process 2 credence is justified. Thus, being formed by a well calibrated or nearly well calibrated process is not sufficient for justification. 
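The point can be checked directly: both processes pass the calibration test, even though only Process 2 uses the total evidence. A sketch, assuming each process's nearby outputs are recorded as (credence, truth-value) pairs and ignoring the slight fallibility of vision (the `well_calibrated` helper is my own shorthand for Dunn's test, not his formalism):

```python
def well_calibrated(outputs):
    """Dunn-style test: each assigned credence must equal its truth-ratio."""
    credences = {c for c, _ in outputs}
    return all(
        sum(t for c, t in outputs if c == x) /
        sum(1 for c, _ in outputs if c == x) == x
        for x in credences)

# Process 1: numeral only -- credence 0.5 that ball n is white,
# correct half the time, since half the 100 balls are white.
process1 = [(0.5, 1)] * 50 + [(0.5, 0)] * 50

# Process 2: numeral and colour -- credence 1, correct every time.
process2 = [(1.0, 1)] * 100

assert well_calibrated(process1) and well_calibrated(process2)
```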
And, if reliability is calibration, then reliability is not justification and reliabilism fails. It is this problem that leads Dunn to reject reliabilism about justified credence. However, as we will see below, I think he is a little hasty.<br /><br />Let us consider the Generality Problem first. To this problem, <a href="http://comesana.arizona.edu/" target="_blank">Juan Comesaña</a> offers the following solution (<a href="http://link.springer.com/article/10.1007%2Fs11098-005-3020-z" target="_blank">paper</a>). Every account of doxastic justification -- that is, every account of when a given doxastic attitude of a particular agent is justified for that agent -- must recognize that two agents may have the same doxastic attitude and the same evidence while the doxastic attitude of one is justified and the doxastic attitude of the other is not, because their doxastic attitudes are not based on the same evidence. The first might base her belief on the total evidence, for instance, whilst the second ignores that evidence and bases his belief purely on wishful thinking. Thus, Comesaña claims, every theory of justification needs a notion of the grounds or the basis of a doxastic attitude. But, once we have that, a solution to the Generality Problem is very close. 
Comesaña spells out the solution for process reliabilism about full beliefs:<br /><br /><b>Well-Founded Process Reliabilism for Justified Full Beliefs</b> A belief that $X$ by an agent $S$ is justified iff<br />(WPB1) $S$ has ground $g$;<br />(WPB2) the belief that $X$ by $S$ is based on ground $g$;<br />(WPB3) the process <i>producing a belief that $X$ based on ground $g$</i> is a reliable process.<br /><br />This is easily adapted to the credal case:<br /><br /><b>Well-Founded Process Reliabilism for Justified Credences</b> A credence of $x$ in $X$ by an agent $S$ is justified iff<br />(WPC1) $S$ has ground $g$;<br />(WPC2) the credence $x$ in $X$ by $S$ is based on ground $g$;<br />(WPC3) the process <i>producing a credence of $x$ in $X$ based on ground $g$</i> is a reliable process.<br /><br />Let us now try to apply Comesaña's solution to the Generality Problem to help Dunn's calibrationist reliabilism about justified credences. Recall: according to Dunn, a process $\rho$ is reliable if it is well calibrated (or nearly so). Consider the process <i>producing a credence of $x$ in $X$ based on ground $g$</i> -- for convenience, we'll write it $\rho^g_{X,x}$. There is only one credence that it assigns, namely $x$. So it is well calibrated if the truth-ratio of $\rho^g_{X,x}$ for $x$ is equal to $x$. Now, $O_{\rho^g_{X,x}}$ is the set of tuples $(x, X, w, t)$ where $w$ is a nearby world and $t$ a nearby time where $\rho^g_{X,x}$ assigns credence $x$ to proposition $X$. But, by the definition of $\rho^g_{X,x}$, those are the nearby worlds and nearby times at which the agent has the ground $g$. Thus, the truth-ratio of $\rho^g_{X,x}$ for $x$ is the proportion of those nearby worlds and times at which the agent has the ground $g$ at which $X$ is true. 
And that, it seems to me, is something like the objective probability of $X$ conditional on the agent having ground $g$, at least given the hypothetical frequentist account of objective probability of the sort that Tang favours. As above, we denote the objective probability of $X$ conditional on the agent $S$ having grounds $g$ as follows: $P(X | \mbox{$S$ has $g$})$. Thus, $P(X | \mbox{$S$ has $g$})$ is the truth-ratio of $\rho^g_{X,x}$ for $x$. And thus, a credence $x$ in $X$ based on ground $g$ is reliable iff $x$ is close to $P(X | \mbox{$S$ has $g$})$. That is,<br /><br /><b>Well-Founded Calibrationist Process Reliabilism for Justified Credences (first attempt)</b> A credence of $x$ in $X$ by an agent $S$ is justified iff<br />(WCPC1) $S$ has ground $g$;<br />(WCPC2) the credence $x$ in $X$ by $S$ is based on ground $g$;<br />(WCPC3) the process <i>producing a credence of $x$ in $X$ based on ground $g$</i> is a (nearly) well calibrated process -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$. <br /><br />But now compare Well-Founded Calibrationist Process Reliabilism, based on Dunn's account of reliable processes and Comesaña's solution to the Generality Problem, with Tang's first attempt at Indicator Reliabilism. Consider the necessary and sufficient conditions that each imposes for justification: TIC1 = WCPC1; TIC2 = WCPC2; TIC3 = WCPC3. Thus, these are the same account. However, as we saw above, Tang's first attempt to formulate indicator reliabilism for justified credence fails because it counts as justified a credence that is not based on an agent's total evidence; and we also saw that, once the Generality Problem is solved for Dunn's calibrationist process reliabilism, it faces a similar problem, namely, the Graining Problem from above. Tang amends his version of indicator reliabilism by adding the fourth condition TIC4 from above. 
Might we amend Dunn's calibrationist process reliabilism in a similar way?<br /><br /><b>Well-Founded Calibrationist Process Reliabilism for Justified Credences</b> A credence of $x$ in $X$ by an agent $S$ is justified iff<br />(WCPC1) $S$ has ground $g$;<br />(WCPC2) the credence $x$ in $X$ by $S$ is based on ground $g$;<br />(WCPC3) the process <i>producing a credence of $x$ in $X$ based on ground $g$</i> is a (nearly) well calibrated process -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$;<br />(WCPC4) there is no more inclusive ground $g'$ that $S$ has and credence $x' \not \approx x$ such that the process <i>producing a credence of $x'$ in $X$ based on ground $g'$</i> is a (nearly) well calibrated process -- that is, $P(X | \mbox{$S$ has $g'$}) \approx x'$.<br /><br />Since TIC4 is equivalent to WCPC4, this final version of process reliabilism for justified credences is equivalent to Tang's final version of his indicator reliabilism for justified credences. Thus, Dunn and Tang have reached the top of the same mountain, albeit by different routes.<br /> <br /><h2>The third route up the mountain</h2><br />Once we have addressed certain problems with the calibrationist version of process reliabilism for justified credence, we see that it agrees with the current best version of indicator reliabilism. This gives us a little hope that both have hit upon the correct account of justification. In the end, I will conclude that both have indeed hit upon the correct <i>extension</i> of the concept of justified credence. But they have done so for the wrong reasons, for they have not hit upon the correct <i>intension</i>.<br /><br />There are two sorts of route you might take when pursuing an account of justification for a given sort of doxastic attitude, such as a credence or a full belief. 
You might look to intuitions concerning particular cases and try to discern a set of necessary and sufficient conditions that sort these cases in the same way that your intuitions do; or, you might begin with an account of epistemic value, assume that justification must be linked in some natural way to the promotion of epistemic value, and then provide an account of justification that vindicates that assumption. Dunn and Tang have each taken a route of the first sort; I will follow a route of the second sort.<br /><br />I will adopt the veritist's account of epistemic value. That is, I take accuracy to be the sole fundamental source of epistemic value for a credence, where a credence in a true proposition is more accurate the higher it is; a credence in a false proposition is more accurate the lower it is. Given this account of epistemic value, what is the natural account of justification? Well, at first sight, there are two: one is process reliabilist; the other is indicator reliabilist. But, in a twist that should come as little surprise given the conclusions of the previous section, it will turn out that these two accounts coincide, and indeed coincide with the final versions of Dunn's and Tang's accounts that we reached above. Thus, I too will reach the top of the same mountain, but by yet another route.<br /><br /><h3>Epistemic value version of indicator reliabilism</h3><br />In the case of full beliefs, indicator reliabilism says this: a belief in $X$ by $S$ on the basis of grounds $g$ is justified iff the objective probability of $X$ given that $S$ has grounds $g$ is high --- that is, close to 1. Tang generalises this to the case of credence, but I think he generalises in the wrong direction; that is, he takes the wrong feature to be salient and uses that to formulate his indicator reliabilism for justified credence. 
He takes the general form of indicator reliabilism to be something like this: a doxastic attitude $s$ towards $X$ by $S$ on the basis of grounds $g$ is justified iff the attitude $s$ 'matches' the objective probability of $X$ given that $S$ has grounds $g$. And he takes the categorical attitude of belief in $X$ to 'match' high objective probability of $X$, and credence $x$ in $X$ to 'match' objective probability of $x$ that $X$. The problem with this account is that it leaves mysterious why justification is valuable. Unless we say that matching objective probabilities is somehow epistemically valuable in itself, it isn't clear why we should want to have justified doxastic attitudes in this sense.<br /><br />I contend instead that the general form of indicator reliabilism is this:<br /><br /><b>Indicator reliabilism for justified doxastic attitude (epistemic value version)</b> Doxastic attitude $s$ towards proposition $X$ by agent $S$ is justified iff<br />(EIA1) $S$ has $g$;<br />(EIA2) attitude $s$ towards $X$ by $S$ is based on $g$;<br />(EIA3) if $g'$ is a ground that $S$ has with $g \subseteq g'$, then for every doxastic attitude $s'$ of the same sort as $s$, the expected epistemic value of attitude $s'$ towards $X$ given that $S$ has $g'$ is at most (or not much above) the expected epistemic value of attitude $s$ towards $X$ given that $S$ has $g'$.<br /><br />Thus, attitude $s$ towards $X$ by $S$ is justified if $s$ is based on a ground $g$ that $S$ has, and $s$ is the attitude towards $X$ that has highest expected accuracy relative to the most inclusive grounds that $S$ has.<br /><br />Let's consider this in the full belief case. 
We have:<br /><br /><b>Indicator reliabilism for justified belief (epistemic value version)</b> A belief in proposition $X$ by agent $S$ is justified iff<br />(EIB1) $S$ has $g$;<br />(EIB2) the belief in $X$ by $S$ is based on $g$;<br />(EIB3) if $g'$ is a ground that $S$ has with $g \subseteq g'$, then<br /><ol><li>the expected epistemic value of <i>disbelief</i> in $X$, given that $S$ has $g'$, is at most (or not much above) the expected epistemic value of <i>belief</i> in $X$, given that $S$ has $g'$;</li><li>the expected epistemic value of <i>suspension</i> in $X$, given that $S$ has $g'$, is at most (or not much above) the expected epistemic value of <i>belief</i> in $X$, given that $S$ has $g'$.</li></ol><br />To complete this, we need only an account of epistemic value. Here, the veritist's account of epistemic value runs as follows. There are three categorical doxastic attitudes towards a given proposition: belief, disbelief, and suspension of judgment. If the proposition is true, belief has greatest epistemic value, then suspension of judgment, then disbelief. If it is false, the order is reversed. It is natural to say that a belief in a truth and disbelief in a falsehood have the same high epistemic value -- following <a href="http://www.kennyeaswaran.org/" target="_blank">Kenny Easwaran</a> (<a href="http://onlinelibrary.wiley.com/doi/10.1111/nous.12099/full" target="_blank">paper</a>), we denote this $R$ (for 'getting it Right'), and assume $R > 0$. And it is natural to say that a disbelief in a truth and belief in a falsehood have the same low epistemic value -- again following Easwaran, we denote this $-W$ (for 'getting it Wrong'), and assume $W > 0$. And finally it is natural to say that suspension of belief in a truth has the same epistemic value as suspension of belief in a falsehood, and both have epistemic value 0. We assume that $W > R$, just as Easwaran does. Now, suppose proposition $X$ has objective probability $p$. 
Then the expected epistemic utility of different categorical doxastic attitudes towards $X$ is given below:<br /><ul><li>Expected epistemic value of belief in $X$ = $p\cdot R + (1-p)\cdot(-W)$.</li><li>Expected epistemic value of suspension in $X$ = $p\cdot 0 + (1-p)\cdot 0$.</li><li>Expected epistemic value of disbelief in $X$ = $p\cdot (-W) + (1-p)\cdot R$. </li></ul>Thus, belief in $X$ has greatest epistemic value amongst the possible categorical doxastic attitudes to $X$ if $p > \frac{W}{R+W}$; disbelief in $X$ has greatest epistemic value if $p < \frac{R}{R+W}$; and suspension in $X$ has greatest value if $\frac{R}{R+W} < p < \frac{W}{R+W}$ (at $p = \frac{W}{R+W}$, belief ties with suspension; at $p = \frac{R}{R+W}$, disbelief ties with suspension). With this in hand, we have the following version of indicator reliabilism for justified beliefs:<br /><br /><b>Indicator reliabilism for justified belief (veritist version)</b> A belief in $X$ by agent $S$ is justified iff<br />(EIB1$^*$) $S$ has $g$;<br />(EIB2$^*$) the belief in $X$ by $S$ is based on $g$;<br />(EIB3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) greater than $\frac{W}{R+W}$;<br />(EIB4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is <i>not</i> (nearly) greater than $\frac{W}{R+W}$.<br /><br />And of course this is simply a more explicit version of the standard version of indicator reliabilism. It is more explicit because it gives a particular threshold above which the objective probability of $X$ given that $S$ has $g$ counts as 'high', and above which (or not much below which) the belief in $X$ by $S$ counts as justified --- that threshold is $\frac{W}{R+W}$.<br /><br />Note that this epistemic value version of indicator reliabilism for justified doxastic states also gives a straightforward account of when a suspension of judgment is justified. 
Simply replace (EIB3$^*$) and (EIB4$^*$) with:<br /><br />(EIS3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) between $\frac{R}{R+W}$ and $\frac{W}{R+W}$;<br />(EIS4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is <i>not</i> (nearly) between $\frac{R}{R+W}$ and $\frac{W}{R+W}$.<br /><br />And when a disbelief is justified. This time, replace (EIB3$^*$) and (EIB4$^*$) with:<br /><br />(EID3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) less than $\frac{R}{R+W}$;<br />(EID4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is <i>not</i> (nearly) less than $\frac{R}{R+W}$.<br /><br />Next, let's turn to indicator reliabilism for justified credence. Here's the epistemic value version:<br /><br /><b>Indicator reliabilism for justified credence (epistemic value version)</b> A credence of $x$ in proposition $X$ by agent $S$ is justified iff<br />(EIC1) $S$ has $g$;<br />(EIC2) credence $x$ in $X$ by $S$ is based on $g$;<br />(EIC3) if $g'$ is a ground that $S$ has with $g \subseteq g'$, then for every credence $x'$, the expected epistemic value of credence $x'$ in $X$ given that $S$ has $g'$ is at most (or not much above) the expected epistemic value of credence $x$ in $X$ given that $S$ has $g'$.<br /><br />Again, to complete this, we need an account of epistemic value for credences. As noted above, the veritist holds that the sole fundamental source of epistemic value for credences is their accuracy. 
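Before turning to accuracy measures for credences, the categorical thresholds above can be checked with a short Python script. The particular values of $R$ and $W$ are hypothetical (any $W > R > 0$ would do):

```python
# Veritist expected epistemic values of the three categorical attitudes,
# given objective probability p of X, reward R and penalty W (W > R > 0).

def expected_values(p, R, W):
    return {
        "belief":     p * R + (1 - p) * (-W),
        "suspension": 0.0,
        "disbelief":  p * (-W) + (1 - p) * R,
    }

def best_attitudes(p, R, W, tol=1e-9):
    """Return the attitude(s) with maximal expected epistemic value."""
    ev = expected_values(p, R, W)
    top = max(ev.values())
    return sorted(a for a, v in ev.items() if v >= top - tol)

R, W = 1.0, 2.0  # so W/(R+W) = 2/3 and R/(R+W) = 1/3
print(best_attitudes(0.9, R, W))   # ['belief']      -- p > 2/3
print(best_attitudes(0.5, R, W))   # ['suspension']  -- 1/3 < p < 2/3
print(best_attitudes(0.2, R, W))   # ['disbelief']   -- p < 1/3
print(best_attitudes(2/3, R, W))   # ['belief', 'suspension'] -- the tie
```

The output matches the thresholds derived above: belief is optimal above $\frac{W}{R+W}$, disbelief below $\frac{R}{R+W}$, and suspension in between, with ties exactly at the thresholds.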
There is a lot to be said about different potential measures of the accuracy of a credence -- see, for instance, <a href="http://www-personal.umich.edu/~jjoyce/" target="_blank">Jim Joyce</a>'s 2009 paper <a href="http://link.springer.com/chapter/10.1007%2F978-1-4020-9198-8_11" target="_blank">'Accuracy and Coherence'</a>, chapters 3 & 4 of <a href="https://richardpettigrew.wordpress.com/" target="_blank">my</a> 2016 book <a href="https://global.oup.com/academic/product/accuracy-and-the-laws-of-credence-9780198732716?cc=gb&lang=en&" target="_blank"><i>Accuracy and the Laws of Credence</i></a>, or <a href="http://www.levinstein.org/" target="_blank">Ben Levinstein</a>'s forthcoming paper <a href="https://www.dropbox.com/s/7dga9rlxertbz5g/Schervish%20draft%202.0.pdf?dl=0" target="_blank">'A Pragmatist's Guide to Epistemic Utility'</a>. But here I will say only this: we assume that those measures are <i>continuous</i> and <i>strictly proper</i>. That is, we assume: (i) that the accuracy of a credence is a continuous function of that credence; and (ii) that any probability $x$ in a proposition $X$ expects credence $x$ to be more accurate than it expects any other credence $x' \neq x$ in $X$ to be. These two assumptions are widespread in the literature on accuracy-first epistemology, and they are required for many of the central arguments in that area. Given veritism and the continuity and strict propriety of the accuracy measures, (EIC3) is provably equivalent to the conjunction of:<br /><br />(EIC3$^*$) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$;<br />(EIC4$^*$) there is no more inclusive ground $g'$ such that (i) $S$ has $g'$ and (ii) the objective probability of $X$ given that the agent has ground $g'$ does not equal or approximate $x$ -- that is, $P(X | \mbox{$S$ has $g'$}) \not \approx x$.<br /><br />But of course EIC3$^*$ = TIC3 and EIC4$^*$ = TIC4 from above. 
Thus, the veritist version of indicator reliabilism for justified credences is equivalent to Tang's indicator reliabilism, and thus to the calibrationist version of process reliabilism. <br /><br /><h2>Epistemic value version of process reliabilism</h2><br />Next, let's turn to process reliabilism. How might we give an epistemic value version of that? The mistake made by the calibrationist version of process reliabilism is of the same sort as the mistake made by Tang in his formulation of indicator reliabilism -- both generalise from the case of full beliefs in the wrong way by mistaking an accidental feature for the salient feature. For the calibrationist, a full belief is justified if it is formed by a reliable process, and a process is reliable if a high proportion of the beliefs it produces are true. Now, notice that there is a sense in which such a process is calibrated: a belief is associated with a high degree of confidence, and that matches, at least approximately, the high truth-ratio of the process. In fact, we want to say that this process is belief-reliable. For it is possible for a process to be reliable in its formation of beliefs, but not in its formation of disbeliefs. So a process is disbelief-reliable if a high proportion of the disbeliefs it produces are false. And we might say that a process is suspension-reliable if a middling proportion of the suspensions it forms are true and a middling proportion are false. In each case, we think that, corresponding to each sort of categorical doxastic attitude $s$, there is a fitting proportion $x$ such that a process is $s$-reliable if $x$ is (approximately) the proportion of truths amongst the propositions to which it assigns $s$. Applying this in the credal case gives us the calibrationist version of process reliabilism that we have already met -- a credence $x$ in $X$ is justified if it is formed by a process whose truth-ratio for a given credence is equal to that credence. 
However, being the product of a belief-reliable process is not the feature of a belief in virtue of which it is justified. Rather, a belief is justified if it is the product of a process that has high expected epistemic value.<br /><br /><b>Process reliabilism for justified doxastic attitude (epistemic value version)</b> Doxastic attitude $s$ towards proposition $X$ by agent $S$ is justified iff<br />(EPA1-$\beta$) $s$ is produced by a process $\rho$;<br />(EPA2-$\beta$) If $\rho'$ is a process that is available to $S$, then the expected epistemic value of $\rho'$ is at most (or not much more than) the expected epistemic value of $\rho$.<br /><br />That is, a doxastic attitude is justified for an agent if it is the output of a process that maximizes or nearly maximizes expected epistemic value amongst all processes that are available to her. To complete this account, we must say which processes count as available to an agent. To answer this, recall Comesaña's solution to the Generality Problem. On this solution, the only processes that interest us have the form, <i>process producing doxastic attitude $s$ towards $X$ on basis of ground $g$</i>. Clearly, a process of this form is available to an agent exactly when the agent has ground $g$. 
This gives<br /><br /><b>Process reliabilism for justified doxastic attitude (epistemic value version)</b> Attitude $s$ towards proposition $X$ by $S$ is justified iff<br />(EPA1-$\alpha$) $s$ is produced by process $\rho^g_{s, X}$;<br />(EPA2-$\alpha$) If $g'$ is a ground that $S$ has with $g \subseteq g'$, then for every doxastic attitude $s'$, the expected epistemic value of process $\rho^{g'}_{s', X}$ is at most (or not much more than) the expected epistemic value of process $\rho^{g'}_{s, X}$.<br /><br />Thus, in the case of full beliefs, we have:<br /><br /><b>Process reliabilism for justified belief (epistemic value version)</b> A belief in proposition $X$ by agent $S$ is justified iff<br />(EPB1) Belief in $X$ is produced by process $\rho^g_{\mathrm{bel}, X}$;<br />(EPB2) if $g'$ is a ground that $S$ has with $g \subseteq g'$, then<br /><ol><li>the expected epistemic value of process $\rho^{g'}_{\mathrm{dis}, X}$ is at most (or not much more than) the expected epistemic value of process $\rho^{g'}_{\mathrm{bel}, X}$;</li><li>the expected epistemic value of process $\rho^{g'}_{\mathrm{sus}, X}$ is at most (or not much more than) the expected epistemic value of process $\rho^{g'}_{\mathrm{bel}, X}$;</li></ol><br />And it is easy to see that (EPB1) = (EIB1) + (EIB2), since belief in $X$ is produced by process $\rho^g_{\mathrm{bel}, X}$ iff $S$ has ground $g$ and a belief in $X$ by $S$ is based on $g$. Also, (EPB2) is equivalent to (EIB3). 
Thus, as with the epistemic value version of indicator reliabilism, we get:<br /><br /><b>Process reliabilism for justified belief (veritist version)</b> A belief in $X$ by agent $S$ is justified iff<br />(EPB1$^*$) $S$ has $g$;<br />(EPB2$^*$) the belief in $X$ by $S$ is based on $g$;<br />(EPB3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) greater than $\frac{W}{R+W}$;<br />(EPB4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is <i>not</i> (nearly) greater than $\frac{W}{R+W}$.<br /><br />Next, consider how the epistemic value version of process reliabilism applies to credences.<br /><br /><b>Process reliabilism for justified credence (epistemic value version)</b> A credence of $x$ in proposition $X$ by agent $S$ is justified iff<br />(EPC1) the credence of $x$ in $X$ is produced by process $\rho^g_{x, X}$;<br />(EPC2) if $g'$ is a ground that $S$ has with $g \subseteq g'$ and $x'$ is a credence, then the expected epistemic value of process $\rho^{g'}_{x', X}$ is at most (or not much more than) the expected epistemic value of process $\rho^{g'}_{x, X}$.<br /><br />As before, we see that (EPC1) is equivalent to (EIC1) + (EIC2). And, providing the measure of accuracy is strictly proper and continuous, we get that (EPC2) is equivalent to (EIC3). So, once again, we arrive at the same summit. 
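The strict propriety assumption doing the work in these equivalences can be illustrated numerically with the Brier score, one standard continuous strictly proper measure (the particular probability 0.3 below is just an illustration):

```python
# Strict propriety of the Brier score: if the objective probability of X
# is p, the expected Brier penalty p*(1-x)^2 + (1-p)*x^2 is uniquely
# minimised at credence x = p.

def expected_brier(p, x):
    """Expected Brier penalty of credence x when X has probability p."""
    return p * (1 - x) ** 2 + (1 - p) * x ** 2

p = 0.3
grid = [i / 100 for i in range(101)]  # candidate credences 0.00, ..., 1.00
best = min(grid, key=lambda x: expected_brier(p, x))
print(best)  # 0.3 -- the expected penalty is minimised at x = p

# Sanity check: no other credence on the grid does better than x = p.
assert all(expected_brier(p, p) <= expected_brier(p, x) for x in grid)
```

It is this feature -- that each probability expects the matching credence to do best -- that makes (EIC3) and (EPC2) collapse into the requirement that the credence match $P(X | \mbox{$S$ has $g$})$ for the most inclusive ground the agent has.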
The routes taken by Tang, Dunn, and the epistemic value versions of process and indicator reliabilism lead to the same spot, namely, the following account of justified credence:<br /><br /><b>Reliabilism for justified credence (epistemic value version)</b> A credence of $x$ in proposition $X$ by agent $S$ is justified iff<br />(ERC1) $S$ has $g$;<br />(ERC2) credence $x$ in $X$ by $S$ is based on $g$;<br />(ERC3) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$;<br />(ERC4) there is no more inclusive ground $g'$ such that (i) $S$ has $g'$ and (ii) the objective probability of $X$ given that the agent has ground $g'$ does not equal or approximate $x$ -- that is, $P(X | \mbox{$S$ has $g'$}) \not \approx x$.<br /><br /><br />Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-76013994225603478572017-01-31T15:09:00.002+00:002017-01-31T15:09:27.280+00:00Fifth Reasoning Club Conference @ Turin EXTENDED DEADLINE<div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;">The <span style="border: 0px; font-family: inherit; font-size: inherit; font-style: inherit; font-weight: 700; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">Fifth Reasoning Club Conference</span> will take place at the Center for Logic, Language, and Cognition in Turin on May 18-19, 2017.</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; 
color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;">Keynote speakers:</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><a href="http://fitelson.org/" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">Branden FITELSON</a> (Northeastern University, Boston)</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><a href="http://www.rug.nl/staff/jeanne.peijnenburg/research" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">Jeanne PEIJNENBURG</a> (University of Groningen)</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><a href="https://www5.unitn.it/People/en/Web/Persona/PER0003393#INFO" style="border: 0px; color: #2c67d1; font-family: 
inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">Katya TENTORI</a> (University of Trento)</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><a href="http://paulegre.free.fr/" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">Paul EGRÉ</a> (Institut Jean Nicod, Paris)</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;">Organizing committee: Gustavo Cevolani (Turin), Vincenzo Crupi (Turin), Jason Konek (Kent), and Paolo Maffezioli (Turin).</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"> </div><div style="background-color: white; border: 0px; color: #1c2024; 
font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><span style="border: 0px; font-family: inherit; font-size: inherit; font-style: inherit; font-weight: 700; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">CALL FOR ABSTRACTS</span></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;">The submission deadline for the Fifth Reasoning Club Conference has been <span style="border: 0px; font-family: inherit; font-size: inherit; font-style: inherit; font-weight: 700; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; vertical-align: baseline;">EXTENDED to 15 February 2017</span>. 
The final decision on submissions will be made by 15 March 2017.</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;">All PhD candidates and early career researchers with interests in reasoning and inference, broadly construed, are encouraged to submit an abstract of up to 500 words (prepared for blind review) via Easy Chair at <a href="https://easychair.org/conferences/?conf=rcc17" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">https://easychair.org/conferences/?conf=rcc17</a>. We especially welcome members of groups that are underrepresented in philosophy to submit. We are committed to promoting diversity in our final programme.</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;">Grants will be available to help cover travel costs for contributed speakers. 
To apply for a travel grant, please send a CV and a short travel budget estimate in a single pdf file to <a class="mailto" href="mailto:reasoningclubconference2017@gmail.com" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">reasoningclubconference2017@gmail.com</a>.</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;">More information is available at <a href="http://www.llc.unito.it/notizie/reasoning-club-2017-llc-call-papers-now-open" target="_blank">http://www.llc.unito.it/notizie/reasoning-club-2017-llc-call-papers-now-open</a>. 
For any queries please contact Vincenzo Crupi (<a class="mailto" href="mailto:vincenzo.crupi@unito.it" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">vincenzo.crupi@unito.it</a>) or Jason Konek (<a href="mailto:J.Konek@kent.ac.uk">J.Konek@kent.ac.uk</a>).</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;">The <a href="https://www.kent.ac.uk/secl/researchcentres/reasoning/club/index.html" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">Reasoning Club</a> is a network of institutes, centres, departments, and groups addressing research topics connected to reasoning, inference, and methodology broadly construed. 
It issues the monthly gazette <a href="http://blogs.kent.ac.uk/thereasoner/about/" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">The Reasoner</a>.</div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;"><br /></div><div style="background-color: white; border: 0px; color: #1c2024; font-family: Roboto, 'Helvetica Neue', Helvetica, Arial, 'Lucida Grande', sans-serif; font-size: 14px; line-height: inherit; outline: 0px; padding: 0px; vertical-align: baseline;">Earlier editions of the meeting were held in <a href="http://www.vub.ac.be/CLWF/RC2012/" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">Brussels</a>, <a href="http://reasoningclubpisa.weebly.com/" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">Pisa</a>, <a href="https://reasoningclubkent.wordpress.com/" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 0px; text-decoration: none; vertical-align: baseline;" target="_blank">Kent</a>, and <a href="http://www.maths.manchester.ac.uk/news-and-events/events/fourth-reasoning-club-conf/" style="border: 0px; color: #2c67d1; font-family: inherit; font-size: inherit; font-style: inherit; line-height: inherit; margin: 0px; outline: 0px; padding: 
0px; text-decoration: none; vertical-align: baseline;" target="_blank">Manchester</a>. </div>Jason Konekhttp://www.blogger.com/profile/01750769966011528630noreply@blogger.com0tag:blogger.com,1999:blog-4987609114415205593.post-10905532510983655542017-01-21T08:09:00.000+00:002017-03-27T17:09:10.679+01:00More on the Principal Principle and the Principle of IndifferenceLast week, I <a href="http://m-phi.blogspot.co.uk/2017/01/the-principal-principle-does-not-imply.html" target="_blank">posted</a> about a recent paper by <a href="http://james-hawthorne.oucreate.com/" target="_blank">James Hawthorne</a>, <a href="https://jlandes.wordpress.com/" target="_blank">Jürgen Landes</a>, <a href="https://kent.academia.edu/ChristianWallmann" target="_blank">Christian Wallmann</a>, and <a href="http://blogs.kent.ac.uk/jonw/" target="_blank">Jon Williamson</a> called <a href="http://bjps.oxfordjournals.org/content/early/2015/07/13/bjps.axv030.abstract" target="_blank">'The Principal Principle implies the Principle of Indifference'</a>, which was published in the <i>British Journal for the Philosophy of Science </i>in 2015. In that post, I read the HLWW paper a particular way. I took their argument to run roughly as follows:<br /><br /><i>The Principal Principle, as Lewis stated it, includes an admissibility condition. Any adequate account of admissibility should entail Conditions 1 and 2 (see below). Together with Conditions 1 and 2, the Principal Principle entails the Principle of Indifference. Thus, the Principal Principle entails the Principle of Indifference.</i><br /><br />Read like this, my response to the argument ran thus:<br /><br /><i>There is an account of admissibility -- namely, Levi-admissibility -- that is adequate and on which Condition 2 is not generally true. 
Levi-admissibility is adequate since it has all of the features that Lewis required of admissibility, and it is very natural when we consider a close relative of Lewis' Principal Principle, namely, Levi's Principal Principle, which follows from Lewis' Principal Principle given some natural assumptions about admissibility that Lewis accepts.</i><br /><br />However, there is another reading of the HLWW argument, and indeed it seems that some of H, L, W, and W favour it. On this alternative reading, Conditions 1 and 2 are not assumed to follow from any adequate account of admissibility; indeed, they are not taken to be consequences of the Principal Principle at all. Rather, they are intended to be plausible further constraints on credences that are independent of the Principal Principle. Thus, on this reading, the conclusion of the HLWW argument is not that the Principal Principle implies the Principle of Indifference. Rather, it is that the Principal Principle, together with two further norms (namely, Conditions 1 and 2), implies the Principle of Indifference.<br /><br />In this post, I will raise an objection to this alternative argument.<br /><br />The HLWW argument turns on a mathematical theorem. It takes certain constraints -- (I), (II), (III) below -- and shows that, if an agent's credence function satisfies those constraints, then it must satisfy a particular instance of the Principle of Indifference.<br /><br /><b>Theorem 1</b> If there is $0 < x < 1$ such that<br />(I) $P(F | X) = P(F)$<br />(II) $P(A | X) = P(A | FX) = x$<br />(III) $P(A | X (A \leftrightarrow F)) = x$<br />then <br />(IV) $P(F) = 0.5$.<br /><br />Now, the instance of the Principle of Indifference that HLWW wish to infer using this theorem is this:<br /><br /><b>Principle of Indifference (atomic case)</b> Suppose $F$ is an atomic proposition and $P_0$ is our agent's initial credence function.
Then $P_0(F) = 0.5$.<br /><br />Thus, to obtain this from Theorem 1, we need the following: for each atomic $F$, there are $A$, $X$, and $0 < x < 1$ that satisfy (I), (II), and (III). Conditions 1 and 2 are intended to secure this, but I think the argument is clearest if we argue for them directly, using the considerations found in HLWW.<br /><br />Thus, suppose $F$ is atomic. Then the idea is this. Pick a proposition $X$ with two features: (a) if you were to learn $X$ and nothing more as your first piece of evidence, it would place a very strict constraint on your credence in $A$ --- it would require you to have credence $x$ in $A$; (b) $X$ provides no information about $F$ nor about the relationship between $A$ and $F$. Now, providing that $A$ is not logically related to $F$, we might take $X$ to be the proposition $C^A_x$ that says that the objective chance of $A$ is $x$. By the Principal Principle, $C^A_x$ has the first feature (a): $P_0(A | X) = x$. What's more, since $A$ is logically independent of $F$, $C^A_x$ also has the second feature (b): in the absence of further evidence, and in particular evidence about the relationship between $A$ and $F$, $C^A_x$ provides no information about $F$ nor about the relationship between $A$ and $F$.<br /><br />Now, with $A$, $X$, $x$ in hand, we appeal to two principles concerning the way that we should respond to evidence:<br /><br />(Ev1): If your credence function is $P$ and your evidence does not provide any information about the connection between $B$ and $C$, then $P(B | C) = P(B)$.<br /><br />In slogan form, this says: <i>Ignorance entails irrelevance</i>. <br /><br />(Ev2): If you have strong evidence concerning $B$ and no evidence concerning $C$, then $P(B | B \leftrightarrow C) = P(B)$.<br /><br />In slogan form, as we will see: <i>Credences supported by stronger evidence are more resilient</i>.
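Theorem 1 can also be checked numerically. The sketch below is my own illustration, not from HLWW: it builds random probability functions over the eight truth-assignments to $A$, $F$, $X$ that satisfy (I), (II), and (III) -- together with $P(A | X) = x$, which, in the application below, the Principal Principle supplies -- and confirms that $P(F) = 0.5$ in every case.

```python
import random

def prob(p, pred):
    """Total probability of the worlds satisfying pred."""
    return sum(weight for world, weight in p.items() if pred(world))

def cond(p, pred, given):
    """Conditional probability P(pred | given)."""
    return prob(p, lambda s: pred(s) and given(s)) / prob(p, given)

# Worlds are triples (A, F, X) of truth values (1 = true, 0 = false).
A = lambda s: s[0] == 1
F = lambda s: s[1] == 1
X = lambda s: s[2] == 1
XIFF = lambda s: X(s) and s[0] == s[1]    # X & (A <-> F)

random.seed(0)
for _ in range(1000):
    x = random.uniform(0.05, 0.95)
    a = random.uniform(0.1, 1.0)
    # Unnormalised weights engineered to satisfy the hypotheses:
    # inside X the four worlds get masses a, a, a(1-x)/x, a(1-x)/x;
    # outside X, F and not-F get equal mass, which secures (I).
    w = {(1, 1, 1): a, (1, 0, 1): a,
         (0, 1, 1): a * (1 - x) / x, (0, 0, 1): a * (1 - x) / x,
         (0, 1, 0): 1.0, (0, 0, 0): 1.0}
    total = sum(w.values())
    p = {s: v / total for s, v in w.items()}
    assert abs(cond(p, F, X) - prob(p, F)) < 1e-9               # (I)
    assert abs(cond(p, A, X) - x) < 1e-9                        # P(A|X) = x
    assert abs(cond(p, A, lambda s: F(s) and X(s)) - x) < 1e-9  # (II)
    assert abs(cond(p, A, XIFF) - x) < 1e-9                     # (III)
    assert abs(prob(p, F) - 0.5) < 1e-9                         # (IV)
print("P(F) = 0.5 in all 1000 random models")
```

Since the weights are constructed to satisfy the hypotheses, this is only a consistency check on the theorem, not a proof; the algebraic proof is in HLWW.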
<br /><br />Now, from (Ev1), we immediately obtain (I) for our agent's initial credence function $P_0$ with $F$ atomic and $X = C^A_x$. After all, if you have no evidence, your evidence certainly does not provide any information about the connection between $C^A_x$ and $F$.<br /><br />From (Ev1) and the Principal Principle, we obtain (II) for $P_0$ with $F$ atomic and $X = C^A_x$. Suppose you first learn $C^A_x$ as evidence. So your credence function is $P_1(-) = P_0(-|C^A_x)$. Now, by hypothesis, $C^A_x$ provides no information about the connection between $F$ and $A$. Then, by (Ev1), $P_1(A | F) = P_1(A)$. So $P_0(A | F\ \&\ C^A_x) = P_0(A | C^A_x)$. And, by the Principal Principle, $P_0(A | C^A_x) = x$. So $P_0(A | F\ \&\ C^A_x) = x$.<br /><br />Finally, from (Ev2) and the Principal Principle, we obtain (III) for $P_0$ with $F$ atomic and $X = C^A_x$. Again, suppose you learn $C^A_x$. So $P_1(-) = P_0(-|C^A_x)$. You thus have strong evidence concerning $A$ and no evidence concerning $F$. Thus, by (Ev2), $P_1(A | A \leftrightarrow F) = P_1(A)$. That is, $P_0(A | C^A_x\ \&\ (A \leftrightarrow F)) = P_0(A | C^A_x)$. And by the Principal Principle, $P_0(A | C^A_x) = x$. So $P_0(A | C^A_x\ \&\ (A \leftrightarrow F)) = x$.<br /><br />Thus, the plausibility of the HLWW argument turns on the plausibility of (Ev1) and (Ev2). Unfortunately, both beg the question concerning the Principle of Indifference. As a result, they cannot be assumed in a justification of that norm. Let's consider each in turn.<br /><br />First, (Ev1). If your evidence does not provide any information about the connection between $B$ and $C$, then this evidence leaves open the possibility that $B$ is positively relevant to $C$; it leaves open the possibility that $B$ is negatively relevant to $C$; and it leaves open the possibility that $B$ is irrelevant to $C$. But (Ev1) demands that we deny the first two possibilities and take $B$ to be irrelevant to $C$. But why?
Without further argument, it seems that we would be equally justified in taking $B$ to be positively relevant to $C$ and equally justified in taking $B$ to be negatively relevant to $C$.<br /><br />Second, (Ev2). The idea is this: When I learn that two propositions, $B$ and $C$, are equivalent, there are many ways I might respond. I might retain my prior credence in $B$ and bring my credence in $C$ into line with that. Or I might retain my prior credence in $C$ and bring my credence in $B$ into line with that. Or I might do many other things. (Ev2) says that, if I have strong evidence concerning $B$ and no evidence concerning $C$, then I should opt for the first response and retain my prior credence in $B$ -- which was formed in response to the strong evidence concerning $B$ -- and bring my credence in $C$ into line with that -- since my prior credence in $C$ was, in any case, formed in response to no relevant evidence at all.<br /><br />Now, on the face of it, this seems like a reasonable constraint on our response to evidence. It says, essentially, that credence formed in response to stronger evidence should be more resilient than credence formed in response to weaker evidence. And, as a limiting case, credence formed in response to strong evidence, such as evidence about the chances, should be maximally resilient when compared to credence formed in response to no evidence. (Note that a similar way of thinking might give an alternative motivation for (II), since this is also a principle of resilient credence.)<br /><br />However, unfortunately, (Ev2) threatens to be inconsistent. After all, it is easy to suppose that there are propositions $B$, $C$, and $D$ such that you have strong evidence for $B$, but no evidence concerning $C$ or $D$ or $C\ \&\ D$ or $C\ \&\ \neg D$.
But, in that situation, (Ev2) entails:<br /><br /><ul><li>$P(B | B \leftrightarrow C) = P(B)$</li><li>$P(B | B \leftrightarrow (C\ \&\ D)) = P(B)$</li><li>$P(B | B \leftrightarrow (C\ \&\ \neg D)) = P(B)$ </li></ul><br />And unfortunately these are jointly inconsistent constraints on a probability function, at least when $0 < P(B) < 1$. To avoid this inconsistency, the defender of (Ev2) must say that, in fact, our lack of evidence concerning $C$, $D$, $C\ \&\ D$ and $C\ \&\ \neg D$ indeed counts as no evidence concerning $C$ and $D$, but does count as evidence concerning $C\ \&\ D$ and $C\ \&\ \neg D$. How might they do that? Well, they might note that, while $C$ and $D$ are each true in half the possible worlds, since they are atomic, $C\ \&\ D$ and $C\ \&\ \neg D$ are true only in a quarter of the possible worlds. And thus a lack of evidence is in fact evidence against them. But of course this line of argument appeals to the Principle of Indifference. Only if you think that every world should receive equal credence will you think that a lack of evidence counts as no evidence for a proposition that is true at half of the possible worlds, but counts as genuine evidence against a proposition that is true at only a quarter of the worlds.<br /><br />Thus, I conclude that the HLWW argument fails.
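As a coda, the inconsistency claimed for (Ev2) can be made concrete. In the sketch below (my own reconstruction, not HLWW's), worlds are triples $(B, C, D)$; given $0 < P(B) < 1$, the three constraints are linear in the world-weights, and solving them forces a negative weight on the $\neg B\ \&\ C\ \&\ \neg D$ world whenever $B\ \&\ C\ \&\ \neg D$ has positive weight.

```python
from fractions import Fraction as Fr
import random

# Worlds are triples (B, C, D).  Write beta = P(B) and k = (1-beta)/beta.
# The three (Ev2) constraints fix, in turn:
#   P(B | B <-> (C & D))  = beta  =>  v2 + v3 + v4 = k * w1
#   P(B | B <-> (C & ~D)) = beta  =>  v1 + v3 + v4 = k * w2
#   P(B | B <-> C)        = beta  =>  v3 + v4      = k * (w1 + w2)
# where w1, w2 are the weights of B&C&D and B&C&~D, and v1, ..., v4 are
# the weights of ~B&C&D, ~B&C&~D, ~B&~C&D, ~B&~C&~D.  Substituting the
# third equation into the first gives v2 = -k * w2 < 0 whenever w2 > 0.

random.seed(0)
for _ in range(1000):
    beta = Fr(random.randint(1, 99), 100)   # P(B), strictly between 0 and 1
    k = (1 - beta) / beta
    w1 = Fr(random.randint(1, 100), 100)    # weight of B & C & D
    w2 = Fr(random.randint(1, 100), 100)    # weight of B & C & ~D
    v3_plus_v4 = k * (w1 + w2)              # forced by the third constraint
    v2 = k * w1 - v3_plus_v4                # forced by the first constraint
    assert v2 == -k * w2 and v2 < 0         # a negative "probability"
print("the three (Ev2) constraints force a negative weight in every case")
```

The degenerate cases where $B\ \&\ C\ \&\ \neg D$ gets zero weight collapse the constraints to $P(B) \in \{0, 1\}$, so they offer the defender of (Ev2) no escape either.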
While (Ev1) and (Ev2) may be true, we cannot appeal to them in order to justify the Principle of Indifference, since they can only be defended by appealing to the Principle of Indifference itself.Richard Pettigrewhttp://www.blogger.com/profile/07828399117450825734noreply@blogger.com14tag:blogger.com,1999:blog-4987609114415205593.post-70283494038819106682017-01-17T10:52:00.000+00:002017-01-25T17:14:43.250+00:00The Principal Principle does not imply the Principle of IndifferenceRecently, <a href="http://james-hawthorne.oucreate.com/" target="_blank">James Hawthorne</a>, <a href="https://jlandes.wordpress.com/" target="_blank">Jürgen Landes</a>, <a href="https://kent.academia.edu/ChristianWallmann" target="_blank">Christian Wallmann</a>, and <a href="http://blogs.kent.ac.uk/jonw/" target="_blank">Jon Williamson</a> published a <a href="https://bjps.oxfordjournals.org/content/early/2015/07/13/bjps.axv030.abstract" target="_blank">paper</a> in the <i>British Journal for the Philosophy of Science</i> in which they claim that the Principal Principle entails the Principle of Indifference -- indeed, the paper is called 'The Principal Principle implies the Principle of Indifference'. In this post, I argue that it does not.<br /><br />All Bayesian epistemologists agree on two claims. The first, which we might call <i>Precise Credences</i>, says that an agent's doxastic state at a given time $t$ in her epistemic life can be represented by a single credence function $P_t$, which assigns to each proposition $A$ about which she has an opinion a precise numerical value $P_t(A)$ that is at least 0 and at most 1. $P_t(A)$ is the agent's credence in $A$ at $t$. It measures how strongly she believes $A$ at $t$, or how confident she is at $t$ that $A$ is true.
The second point of agreement, which is typically known as <i>Probabilism</i>, says that an agent's credence function at a given time should be a probability function: that is, for all times $t$, $P_t(\top) = 1$ for any tautology $\top$, $P_t(\bot) = 0$ for any contradiction $\bot$, and $P_t(A \vee B) = P_t(A) + P_t(B) - P_t(AB)$ for any propositions $A$ and $B$.<br /><br />So Precise Credences and Probabilism form the core of Bayesian epistemology. But, beyond these two norms, there is little agreement between its adherents. Bayesian epistemologists disagree along (at least) two dimensions. First, they disagree about the correct norms concerning updating on evidence learned with certainty --- some say they are diachronic norms concerning how an agent should in fact update; others say that there are only synchronic norms concerning how an agent should plan to update; and others think there are no norms concerning updating at all. Second, they disagree about the stringency of the synchronic norms that don't concern updating. Our concern here is with the latter. Some candidate norms of this sort: the Principal Principle, which says how an agent's credences in propositions concerning the objective chances should relate to her credences in other propositions (Lewis 1980); the Reflection Principle, which says how an agent's current credences in propositions concerning her future credences should relate to her current credences in other propositions (van Fraassen 1984, Briggs 2009); and the Principle of Indifference, which says, roughly, that an agent with no evidence should divide her credences equally over all possibilities (Keynes 1921, Carnap 1950, Jaynes 2003, Williamson 2010, Pettigrew 2014). Those we might call <i>Radical Subjective Bayesians</i> adhere to Precise Credences and Probabilism, but reject the Principal Principle, the Reflection Principle, and the Principle of Indifference.
Those we might call <i>Moderate Subjective Bayesians</i> adhere to Precise Credences, Probabilism, and the Principal Principle (and also, quite often, the Reflection Principle), but they reject the Principle of Indifference. And the <i>Objective Bayesians</i> accept all of the principles.<br /><br />In a recent paper, Hawthorne et al. (2015) (henceforth, HLWW) argue that Moderate Subjective Bayesianism is an inconsistent position, because the Principal Principle (and, indeed the Reflection Principle) entails the Principle of Indifference. Thus, it is inconsistent to accept the former and reject the latter. We must either reject the Principal Principle, as the Radical Subjective Bayesian does, or accept it together with the Principle of Indifference, as the Objective Bayesian does.<br /><br />Notoriously, as Lewis originally stated it, the Principal Principle includes an <i>admissibility condition</i> (266-7, Lewis 1980). Equally notoriously, Lewis did not provide a precise account of this condition, thereby leaving his formulation of the principle similarly imprecise. HLWW do not give a precise account either. But they do appeal to two principles that they take to follow intuitively from the Principal Principle. And from these two principles, together with the Principal Principle itself, they derive what they take to be an instance of the Principle of Indifference. The first principle to which they appeal --- their Condition 1 --- is in fact provable, as they note. The second --- their Condition 2 --- is not. Indeed, as we will see, on the correct understanding of admissibility, it is false. Thus, the HLWW argument fails. What's more, its conclusion is not true. It is possible to satisfy the Principal Principle without satisfying the Principle of Indifference, as we will see below. Moderate Subjective Bayesianism is a coherent position.<br /><br /><br /><h2>Introducing the Principal Principle</h2><br />We begin by introducing the Principal Principle. 
To aid our statement, let me introduce a piece of notation. Given a proposition $A$ and a real number $0 \leq x \leq 1$, let $C^A_x$ be the following proposition: <i>The current objective chance of $A$ is $x$</i>. And we will let $P_0$ be the credence function of our agent at the very beginning of her epistemic life --- when she is, as Lewis would say, a <i>superbaby</i>; that is, she is not yet in receipt of any evidence. Then, as Lewis originally formulates the Principal Principle, it says this:<br /><br /><b>Lewis' Principal Principle</b> Suppose $A$, $E$ are propositions and $0 \leq x \leq 1$. Then it should be the case that $$P_0(A | C^A_xE) = x $$providing (i) $P_0(C^A_xE) > 0$, and (ii) $E$ is admissible for $A$.<br /><br />In this version, the principle applies to an agent only at the beginning of her epistemic life; it governs her initial credence function. In this situation, the principle says, her credence in a proposition $A$ conditional on the conjunction of some proposition $E$ and a chance proposition that says that the chance of $A$ is $x$ should be $x$, providing the conditional probability is well-defined and $E$ is admissible for $A$.<br /><br />The motivation for the admissibility condition is this. Suppose $E$ entails $A$. Then we surely don't want to demand that $P_0(A | C^A_xE) = x$. After all, if $x < 1$, then such a demand would conflict with Probabilism, since it is a consequence of Probabilism that, if $E$ entails $A$, then $P_0(A | C^A_xE) = 1$. Thus, we must at least restrict the Principal Principle so that it does not apply when $E$ entails $A$. But there are other cases in which the Principal Principle should not be imposed, even if such an application would not be outright inconsistent with other norms such as Probabilism. For instance, suppose that $E$ entails that the chance of $A$ at some time in the future is $x' \neq x$. Then, again, we don't want to require that $P_0(A | C^A_xE) = x$. 
The moral is this: if $E$ contains information about $A$ that <i>overrides</i> the information that the current chance of $A$ gives about $A$, then it is inadmissible. Clearly any proposition that logically entails $A$ provides information that overrides the current chance information about $A$; and so does a proposition that entails something about the future chance of $A$. So much for propositions that are inadmissible. Are there any we can be sure are admissible? According to Lewis, there are, namely, propositions solely concerning the past or the present. Thus, Lewis does not give a precise account of admissibility: he gives a heuristic --- $E$ is admissible for $A$ if $E$ does not provide information about $A$ that overrides the information contained in propositions about the current chance of $A$ --- and he gives examples of propositions that do and do not provide such information --- I've recalled some of Lewis' examples here.<br /><br />Now, as Lewis himself noted, the Principal Principle has implausible consequences when the chances are self-undermining --- that is, when the chances assign a positive probability to outcomes in which the chances are different. This happens, for instance, for Lewis' own favoured account of chance, the Humean account or Best System Analysis. This led to reformulations of the Principal Principle, such as Thau's and Hall's New Principle (Lewis 1994, Thau 1994, Hall 1994) and Ismael's General Recipe (Ismael 2008). HLWW say nothing explicitly about whether or not chances are self-undermining. But, since they are interested in investigating the Principal Principle and not the New Principle or the General Recipe, I take them to assume that chances are not self-undermining.
I will do likewise.<br /><br /><h2><b>The HLWW argument</b></h2><br />However imprecise Lewis' account of admissibility is, HLWW take it to be precise enough to allow us to be confident of the following principles:<br /><br /><b>Condition 1 </b> If<br />(1a) $E$ is admissible for $A$, and<br />(1b) $C^A_xE$ contains no information that renders $F$ relevant to $A$,<br />then<br />(1c) $EF$ is admissible for $A$.<br /><br />Now, HLWW propose to make (1b) precise as follows: $$P_0(A | FC^A_xE) = P_0(A | C^A_xE)$$ That is, $C^A_xE$ contains no information that renders $F$ relevant to $A$ just in case $C^A_xE$ renders $A$ probabilistically independent of $F$. With that explication in hand, Condition 1 now actually follows logically from Lewis' Principal Principle, as HLWW note. After all, by (1a) and Lewis' Principal Principle, $P_0(A | C^A_xE) = x$. And, by the explication of (1b), $P_0(A | C^A_xE) = P_0(A | FC^A_xE)$. Daisychaining these identities together, we have $P_0(A | FC^A_xE) = x$, which is (1c).<br /><br /><b>Condition 2</b> If<br />(2a) $E$ is admissible for $A$, and<br />(2b) $C^A_xE$ contains no information that renders $F$ relevant to $A$,<br />then<br />(2c) $E(A \leftrightarrow F)$ is admissible for $A$.<br /><br />This is not provable. Indeed, as we will see below, it is false. Nonetheless, together with Lewis' Principal Principle, Conditions 1 and 2 entail a constraint on an agent's credence function that HLWW take to be the constraint imposed by the Principle of Indifference.<br /><br /><b>Proposition 1 </b>Suppose Lewis' Principal Principle together with Conditions 1 and 2 hold. And suppose that there are propositions $A$, $E$, and $F$ and $0 < x < 1$ such that $E$ is admissible for $A$. Suppose further that $F$ is atomic and contingent. 
Then<br /><br />(i) If $C^A_xE$ contains no information that renders $F$ relevant to $A$, then the following is required of the agent's initial credence function: $P_0(F | C^A_xE) = 0.5.$<br /><br />(ii) If $C^A_xE$ contains no information whatsoever about $F$ (so that $P_0(F | C^A_xE) = P_0(F)$), then the following is required of the agent's initial credence function: $P_0(F) = 0.5$<br /><br />HLWW take Proposition 1 to show that the Principle of Indifference follows from the Principal Principle. After all, Condition 1 is simply a theorem. And they take Condition 2 to be a consequence of the Principal Principle, given the correct understanding of admissibility. So if you assume the Principal Principle, you get all of the hypotheses of the theorem. However, as we will see in the next two sections, Condition 2 is in fact false.<br /><br /><h2>Levi's Principal Principle and Levi-Admissibility</h2><br />Above, we stated the Principal Principle as follows:<br /><br /><b>Lewis' Principal Principle</b> $P_0(A | C^A_xE) = x$, providing (i) $P_0(C^A_xE) > 0$, and (ii) $E$ is admissible for $A$.<br /><br />Now suppose we make the following assumption about admissibility:<br /><br /><b>Current Chance Admissibility</b> Propositions about the current objective chances are admissible.<br /><br />Thus, for instance, $P_0(A | C^A_xC^B_y) = x$, providing $P_0(C^A_xC^B_y) > 0$, which also ensures that $C^A_x$ and $C^B_y$ are compatible.<br /><br />Now suppose that, if $ch$ is a probability function defined over all the propositions about which the agent has an opinion, $C_{ch}$ is the proposition that says that the objective chances are given by $ch$. Then it follows from the Principal Principle and Current Chance Admissibility that $P_0(A | C_{ch}) = ch(A)$. 
But it also follows from this that:<br /><br /><b>Levi's Principal Principle</b> (Bogdan 1984, Pettigrew 2012) $P_0(A | C_{ch}E) = ch(A | E)$, providing $P_0(C_{ch}E), ch(E) > 0$.<br /><br />This is a version of the Principal Principle that makes no mention of admissibility. From it, something close to Lewis' Principal Principle follows: If $P_0(C^{A|E}_x E) > 0$, then $$P_0(A | C^{A|E}_x E) = x$$ where $C^{A|E}_x$ is the proposition: <i>The current objective chance of $A$ conditional on $E$ is $x$</i>. What's more, while Levi's version does not mention admissibility, since it applies equally when the proposition $E$ is not admissible, it does suggest a precise account of admissibility. And it is possible to show that, if we take the version of Lewis' Principal Principle that results from understanding admissibility in this way, it is a consequence of Levi's Principal Principle.<br /><br /><b>Levi-Admissibility</b> <i>$E$ is Levi-admissible for $A$</i> if, for all possible chance functions $ch$, $ch(A | E) = ch(A)$. <br /><br />That is, on this account $E$ is admissible for $A$ if every chance function renders $A$ and $E$ stochastically independent. Three points are worthy of note:<br /><ol><li>All propositions providing future information about the chance of $A$ or information about the truth value of $A$ are Levi-inadmissible, since $A$ will be stochastically dependent on such propositions according to all possible current chance functions. So this account of admissibility agrees with the examples of clearly inadmissible propositions that we gave above.</li><li>All propositions solely about the past are Levi-admissible, since all such propositions will now be true or false and will be assigned chance 1 or 0 accordingly by all possible current chance functions. So this account of admissibility agrees with the examples of clearly admissible propositions that we gave above.</li><li>If $E$ is Levi-admissible for $A$, then $P_0(A | C^A_xE) = P_0(A | C^{A|E}_xE ) = x$.
That is, Lewis' Principal Principle follows from Levi's version if we understand Lewis' notion of admissibility as Levi-admissibility.</li></ol>Taken together, (1), (2), and (3) entail that Levi-admissibility has all of the features that Lewis wished admissibility to have.<br /><br />Now, although Levi's account of admissibility recovers Lewis' examples, it might seem to be too demanding. Suppose, for instance, that $A$ is a proposition concerning the toss of a coin in Quito --- it says that it will land heads --- while $E$ is a proposition concerning tomorrow's weather in Addis Ababa --- it says that it will rain. Then, intuitively, $E$ is admissible for $A$. But $E$ is not Levi-admissible for $A$. After all, we are considering an agent at the beginning of her epistemic life. And so there are certainly possible chance functions --- probability functions that, for all she knows, give the objective chances --- that do not render $E$ and $A$ stochastically independent.<br /><br />However, in fact, on closer inspection, the Levi-admissibility verdict is exactly right. Consider my credence in $A$ conditional on $E$ and the chance hypothesis $C^A_{0.5}$, which says that the coin in Quito is fair and so the unconditional chance of $A$ is 0.5. Amongst the chance functions that are epistemically possible for me, some make $E$ irrelevant to $A$, some make it positively relevant to $A$ and some make it negatively relevant to $A$. Indeed, we might suppose that the possible chances of $A$ conditional on $E$ run the full gamut of values from 0 to 1. In that case, surely we don't want to say that $E$ is admissible for $A$ and thereby impose, via the Principal Principle, the demand that our agent's credence in $A$ conditional on $E$ and $C^A_{0.5}$ is 0.5.
After all, if I choose to place most of my prior credence on the chance hypotheses on which $E$ is positively relevant to $A$, then my credence in $A$ conditional on $E$ and $C^A_{0.5}$ should not be 0.5 --- it should be something greater than 0.5. If I choose to place most of my prior credence on the chance hypotheses on which $E$ is negatively relevant to $A$, then my credence in $A$ conditional on $E$ and $C^A_{0.5}$ should not be 0.5 --- it should be something less than 0.5. Of course, we might think that it is irrational for our agent, a superbaby with no evidence one way or the other, to favour the positive relevance hypotheses over those that posit neutral relevance and negative relevance. We might think that she should spread her credences equally over all of the possibilities, in which case their effects will cancel out, and her credence in $A$ conditional on $E$ and $C^A_{0.5}$ will indeed be 0.5. But of course to do this is to assume the Principle of Indifference and beg the question.<br /><br /><h2>The failure of Condition 2</h2><br />With this precise account of admissibility in hand, we can now test to see whether or not it vindicates Condition 2 --- recall, HLWW claim that this is a consequence of the Principal Principle. As we saw above, Condition 2 runs as follows:<br /><br /><b>Condition 2</b> If<br />(2a) $E$ is admissible for $A$, and<br />(2b) $C^A_xE$ contains no information that renders $F$ relevant to $A$,<br />then<br />(2c) $E(A \leftrightarrow F)$ is admissible for $A$.<br /><br />Now suppose that Lewis' Principal Principle is true, and assume that admissibility means Levi-admissibility. Then this is equivalent to:<br /><br /><b>Condition 2$^*$</b> If $ch$ is a possible chance function, and<br />(2a$^*$) $ch(A | E) = ch(A)$, and<br />(2b$^*$) $ch(A | FE) = ch(A | E)$,<br />then<br />(2c$^*$) $ch(A | E(A \leftrightarrow F)) = ch(A)$.<br /><br />However, this is false. 
Indeed, we can show the following:<br /><br /><b>Proposition</b> <b>2</b> For any value $0 \leq y \leq 1$, there is a chance function $ch$ such that (2a$^*$) and (2b$^*$) hold, but $$ch(A | E(A \leftrightarrow F)) = y$$<br /><br />Thus, (2a$^*$) and (2b$^*$) impose no constraints whatsoever on the chance of $A$ conditional on $E(A \leftrightarrow F)$.<br /><br />Thus, it is possible that $E$ is Levi-admissible for $A$ and that $C^A_xE$ carries no information whatsoever about $F$, and yet $E(A \leftrightarrow F)$ is not Levi-admissible for $A$. Thus, Condition 2 is false and the HLWW argument fails.<br /><br /><h2>Levi's Principal Principle and the Principle of Indifference</h2><br />Of course, the failure of an argument does not entail the falsity of its conclusion. It might yet be the case that the Principal Principle entails the Principle of Indifference, even if the HLWW argument does not show that. But in fact we can show that this is not true. To see this, we note a sufficient condition for satisfying Levi's Principal Principle:<br /><br /><b>Proposition 3</b> Suppose $C$ is the set of all possible chance functions. Then, if $P_0$ is in the convex hull of $C$, then $P_0(A | C_{ch} E) = ch(A | E)$.<br /><br />Now, if Levi's Principal Principle entails the Principle of Indifference, and the Principle of Indifference entails that every atomic proposition has probability 0.5, then it follows that every member of the convex hull of the set of possible chance functions must assign probability 0.5 to every atomic proposition. But it is easy to see that this is not true. Let $F$ be the atomic proposition that says that a sample of uranium will decay at some point in the next hour. In the absence of evidence, the possible chances of $F$ range over the full unit interval from 0 to 1. Thus, there are members of the convex hull of the set of possible chance functions that assign probabilities other than 0.5 to $F$. 
And, by Proposition 3, these members will satisfy Levi's Principal Principle.<br /><br /><h2>Applying Levi's Principal Principle</h2><br />A possible objection: Levi's Principal Principle is all well and good in theory, but it is not applicable. Suppose we are interested in a proposition $A$; and we have collected evidence $E$. How might we apply Levi's Principal Principle in order to set our credence in $A$? In the case of Lewis' version of the principle, we need only know the chance of $A$ and the fact that $E$ is admissible for $A$, and we often know both of these. But, in order to apply Levi's version, we must know the chance of $A$ <i>conditional on our evidence $E$</i>. And, at least for large and varied bodies of evidence, we never know this. Or so the objection goes.<br /><br />But the objection fails. In fact, Levi's Principal Principle may be applied in those cases. You don't have to know the chance of $A$ conditional on $E$ in order to set your credence in $A$ when you have evidence $E$. You simply have to have opinions about the different possible values that that conditional chance might take. You then apply Levi's Principal Principle, together with the Law of Total Probability, which jointly entail that your credence in $A$ given $E$ should be your expectation of the chance of $A$ given $E$. Of course, neither Levi's Principal Principle nor the Law of Total Probability will tell you how to set your credences in the different possible values that the conditional chance of $A$ given $E$ might take. But that's not a problem for the Moderate Subjective Bayesian, who doesn't expect her evidence to pin down a unique credal response. Only the Objective Bayesian would expect that. 
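The recipe just described can be sketched in a few lines. The particular chance hypotheses and the credences assigned to them below are invented for illustration; nothing in Levi's Principal Principle fixes them:

```python
from fractions import Fraction as Fr

# Invented numbers: three hypotheses about the conditional chance ch(A|E),
# together with the agent's freely chosen credences in those hypotheses.
hypotheses = [
    # (credence in the hypothesis, value of ch(A|E) it posits)
    (Fr(1, 5),  Fr(1, 10)),
    (Fr(1, 2),  Fr(3, 5)),
    (Fr(3, 10), Fr(9, 10)),
]

# Levi's Principal Principle + the Law of Total Probability:
# cr(A|E) is the expectation of the conditional chance of A given E.
credence_A_given_E = sum(cr * ch for cr, ch in hypotheses)
print(credence_A_given_E)  # prints 59/100
```

The weights over the hypotheses are the Moderate Subjective Bayesian's free choice; the principle then determines the credence as the weighted average.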
You pick your probability distribution over those possible conditional chance values and Levi's Principal Principle does the rest via the Law of Total Probability.<br /><br /><h2>Conclusion</h2><br /><br />The HLWW argument purports to show that the Principal Principle entails the Principle of Indifference. But it fails because, on the correct understanding of admissibility, Condition 2 is not a consequence of the Principal Principle; and indeed it is false. What's more, we can see that there are credence functions that satisfy the correct version of the Principal Principle --- namely, Levi's Principal Principle --- that do not satisfy the Principle of Indifference. The logical space is therefore safe once again for Moderate Subjective Bayesians, that is, those who accept Precise Credences, Probabilism, the Principal Principle (and perhaps the Reflection Principle), but who deny the Principle of Indifference.<br /><br /><br /><h2>References</h2><ul><li>Bogdan, R. (Ed.) (1984). <i>Henry E. Kyburg, Jr. and Isaac Levi</i>. Dordrecht: Reidel.</li><li>Briggs, R. (2009). Distorted Reflection. <i>Philosophical Review</i>, 118(1), 59–85.</li><li>Carnap, R. (1950). <i>Logical Foundations of Probability</i>. Chicago: University of Chicago Press.</li><li>Hall, N. (1994). Correcting the Guide to Objective Chance. <i>Mind</i>, 103, 505–518.</li><li>Hawthorne, J., Landes, J., Wallmann, C., & Williamson, J. (2015). The Principal Principle Implies the Principle of Indifference. <i>The British Journal for the Philosophy of Science</i>. </li><li>Ismael, J. (2008). Raid! Dissolving the Big, Bad Bug. <i>Noûs</i>, 42(2), 292–307.</li><li>Jaynes, E. T. (2003). <i>Probability Theory: The Logic of Science</i>. Cambridge, UK: Cambridge University Press.</li><li>Keynes, J. M. (1921). <i>A Treatise on Probability</i>. London: Macmillan.</li><li>Lewis, D. (1980). A Subjectivist’s Guide to Objective Chance. In R. C. Jeffrey (Ed.), <i>Studies in Inductive Logic and Probability</i>, vol. II. 
Berkeley: University of California Press.</li><li>Lewis, D. (1994). Humean Supervenience Debugged. <i>Mind</i>, 103, 473–490.</li><li>Pettigrew, R. (2012). Accuracy, Chance, and the Principal Principle. <i>Philosophical Review</i>, 121(2), 241–275.</li><li>Pettigrew, R. (2014). Accuracy, Risk, and the Principle of Indifference. <i>Philosophy and Phenomenological Research</i>.</li><li>Thau, M. (1994). Undermining and Admissibility. <i>Mind</i>, 103, 491–504.</li><li>van Fraassen, B. C. (1984). Belief and the Will. <i>Journal of Philosophy</i>, 81, 235–256.</li><li>Williamson, J. (2010). <i>In Defence of Objective Bayesianism</i>. Oxford: Oxford University Press.</li></ul>Richard Pettigrew<br /><br /><b>Assistant professorship in mathematical philosophy, University of Gdansk</b> (2016-12-20)<div dir="ltr" style="text-align: left;" trbidi="on"><b style="text-align: justify;">Assistant Professorship</b><span style="text-align: justify;"> (“adiunkt” in Polish terminology) in the Chair of Logic, Philosophy of Science and Epistemology is available at the Department of Philosophy, Sociology and Journalism, University of Gdansk, Poland. The position is to start between July 1 and September 1, 2017, for a fixed term with the possibility of extension. Decisions about the exact start date of the contract and the number of years will be made during the hiring process. 
No knowledge of Polish is required.</span><br /><span style="text-align: justify;"><br /></span><span style="text-align: justify;">Details available <a href="http://entiaetnomina.blogspot.in/2016/12/assistant-professorship-in-mathematical.html">here</a>.</span></div>Rafal Urbaniak<br /><br /><b>Call for submissions: PhDs in Logic IX, Bochum, 2nd - 4th May 2017</b> (2016-12-18)<div dir="ltr" style="text-align: left;" trbidi="on"><div class="p1"><span class="s1">PhDs in Logic is an annual graduate conference organised by local graduate students. This interdisciplinary conference welcomes contributions to various topics in mathematical logic, philosophical logic, and logic in computer science. It involves tutorials by established researchers as well as short (20-minute) presentations by PhD students, master's students and first-year postdocs on their research.</span></div><div class="p1"><span class="s1">We are happy to announce that the ninth edition of PhDs in Logic will take place at the Ruhr University Bochum, Germany, on 2nd - 4th May 2017.</span></div><div class="p2"><span class="s1"></span><br /></div><div class="p1"><span class="s1">Confirmed tutorial speakers are:</span></div><div class="p1"><span class="s1">Petr Cintula (Czech Academy of Sciences)</span></div><div class="p1"><span class="s1">María Manzano (University of Salamanca)</span></div><div class="p1"><span class="s1">João Marcos (University of Natal)</span></div><div class="p1"><span class="s1">Gabriella Pigozzi (Paris Dauphine University)</span></div><div class="p1"><span class="s1">Christian Straßer (Ruhr-University Bochum)</span></div><div class="p1"><span class="s1">Heinrich Wansing (Ruhr-University Bochum)</span></div><div class="p2"><span class="s1"></span><br /></div><div class="p1"><span class="s1">Abstract 
submission:</span></div><div class="p1"><span class="s1">PhD students, master's students and first-year postdocs in logic from disciplines including, but not limited to, philosophy, mathematics and computer science are invited to submit an extended abstract on their research. Submitted abstracts should be between 2 and 3 pages, including the relevant references. Each abstract will be anonymously reviewed by the scientific committee. Accepted abstracts will be presented by their authors in a 20-minute presentation during the conference. The deadline for abstract submission is 2nd February 2017. Please submit your blinded abstract via: <a href="https://easychair.org/conferences/?conf=phdsinlogic9"><span class="s2">https://easychair.org/conferences/?conf=phdsinlogic9</span></a></span></div><div class="p2"><span class="s1"></span><br /></div><div class="p1"><span class="s1">For more information please see:</span></div><div class="p3"><span class="s3"><a href="http://www.ruhr-uni-bochum.de/phdsinlogicix">http://www.ruhr-uni-bochum.de/phdsinlogicix</a></span></div><div class="p2"><span class="s1"></span><br /></div><div class="p1"><span class="s1">Local organisers:</span></div><style type="text/css">p.p1 {margin: 0.0px 0.0px 0.0px 0.0px; font: 13.0px Helvetica; -webkit-text-stroke: #000000} p.p2 {margin: 0.0px 0.0px 0.0px 0.0px; font: 13.0px Helvetica; -webkit-text-stroke: #000000; min-height: 16.0px} p.p3 {margin: 0.0px 0.0px 0.0px 0.0px; font: 13.0px Helvetica; color: #4787ff; -webkit-text-stroke: #4787ff} span.s1 {font-kerning: none} span.s2 {text-decoration: underline ; font-kerning: none; color: #4787ff; -webkit-text-stroke: 0px #4787ff} span.s3 {text-decoration: underline ; font-kerning: none} </style> <br /><div class="p1"><span class="s1">Christopher Badura, AnneMarie Borg, Jesse Heyninck and Daniel Skurt</span></div></div>Rafal 
Urbaniak<br /><br /><b>Assistant Professorship at the MCMP</b> (2016-10-27)<br /><br />Ludwig-Maximilians-University Munich is seeking applications for one<br /><br /><b>Assistant Professorship position in Logic and Philosophy of Language</b><br />(for three years, with the possibility of extension)<br /><br />at the Chair of Logic and Philosophy of Language (Professor Hannes Leitgeb) and the Munich Center for Mathematical Philosophy (MCMP) at the Faculty of Philosophy, Philosophy of Science, and Study of Religion. The position, which is to start on April 1st 2017, is for three years with the possibility of extension.<br /><br />The appointee will be expected (i) to do philosophical research, especially in logic and philosophy of language, (ii) to teach five hours a week in areas relevant to the chair, and (iii) to participate in the administrative work of the MCMP.<br /><br />The successful candidate will have a PhD in philosophy or logic, will have teaching experience in philosophy and logic, and will have carried out research in logic and related areas (such as philosophy of logic, philosophy of language, philosophy of mathematics, formal epistemology).<br /><br />Women are currently underrepresented in the Faculty; we therefore particularly welcome applications for this post from suitably qualified female candidates. 
Furthermore, given equal qualification, severely physically challenged individuals will be preferred.<br /><br />Applications (including CV, certificates, list of publications), a description of planned research projects (1000-1500 words), and letters of reference from two referees should be sent either by email (ideally all requested documents in just one PDF document) or by mail to<br /><br />Ludwig-Maximilians-Universität München<br />Faculty of Philosophy, Philosophy of Science and Study of Religion<br />Chair of Logic and Philosophy of Language / MCMP<br />Geschwister-Scholl-Platz 1<br />80539 München<br />E-Mail: <a href="mailto:office.leitgeb@lrz.uni-muenchen.de" target="_blank">office.leitgeb@lrz.uni-muenchen.de</a><br /><br />by<br /><br /><b>December 1st, 2016</b>.<br /><br />If at all possible, we much prefer to receive applications by email.<br /><br />Contact for informal inquiries: office.leitgeb@lrz.uni-muenchen.de<br /><br />More information about the MCMP can be found at <a href="http://www.mcmp.philosophie.uni-muenchen.de/index.html" target="_blank">http://www.mcmp.philosophie.uni-muenchen.de/index.html</a>.<br /><br />The German description of the position is to be found at <a href="http://www.uni-muenchen.de/aktuelles/stellenangebote/wissenschaft/20161017140416.html" target="_blank">http://www.uni-muenchen.de/aktuelles/stellenangebote/wissenschaft/20161017140416.html</a>.<br /><br />*****<br /><br />Vincenzo Crupi<br /><br /><b>Entia et Nomina 2017 CFP</b> (2016-10-12)<div dir="ltr" style="text-align: left;" trbidi="on"><div style="text-align: justify;">The “Entia et Nomina” series features English-language workshops for researchers in formally oriented philosophy, in particular in logic, philosophy of science, formal epistemology and philosophy of language. 
The aim of the workshop is to foster cooperation among philosophers with a formal bent. Previous editions took place at Gdansk University, Ghent University (as part of the Trends in Logic series), Jagiellonian University, and Warsaw University. The sixth conference in the series will take place in Palolem, Goa, India, on 29 January - 5 February 2017. Invited speakers confirmed so far include:</div><br />Krzysztof Posłajko (Jagiellonian University)<br />Katarzyna Kijania-Placek (Jagiellonian University)<br />Tomasz Placek (Jagiellonian University)<br />Nina Gierasimczuk (Danish Technical University)<br />Cezary Cieślinski (Warsaw University)<br />Marcello Dibello (City University of New York)<br /><br /><br /><div style="text-align: justify;">Authors of contributed papers are requested to submit short (up to 2 normalized pages) and extended (up to 6 pages) abstracts, prepared for blind review, in PDF format, by 30.10.2016. Decisions about acceptance will be communicated by 20.11.2016.</div><br /><div style="text-align: justify;">Authors of accepted papers will have 40 minutes to present their work. Each paper will be followed by a 10-minute commentary prepared beforehand by another participant. Accepted participants might also be asked to comment on at least one talk. Commentaries will be followed by 10-15 minutes of discussion. Applications may also be made for the role of commentator only, in which case only a short CV is requested. 
We aim to make the short versions of accepted papers available to the participants ahead of the conference.</div><br />Please send your abstracts, questions and any inquiries to both Rafal Urbaniak (rfl.urbaniak@gmail.com) and Juliusz Doboszewski (jdoboszewski@gmail.com).<br /><div><br /></div></div>Rafal Urbaniak<br /><br /><b>CFA: The Fifth Reasoning Club Conference</b> (2016-10-04)<br /><br /><b>Call for Abstracts: The Fifth Reasoning Club Conference</b><br />University of Torino, 18-19 May 2017<br /><br />Keynote speakers:<br /><a href="http://fitelson.org/" target="_blank">Branden FITELSON</a> (Northeastern University, Boston)<br /><a href="http://www.rug.nl/staff/jeanne.peijnenburg/research" target="_blank">Jeanne PEIJNENBURG</a> (University of Groningen)<br /><a href="https://www5.unitn.it/People/en/Web/Persona/PER0003393#INFO" target="_blank">Katya TENTORI</a> (University of Trento)<br /><a href="http://paulegre.free.fr/" target="_blank">Paul EGRÉ</a> (Institut Jean Nicod, Paris)<br /><br />Please visit <a href="http://www.llc.unito.it/notizie/fifth-reasoning-club-meeting-llc-2017" target="_blank">http://www.llc.unito.it/notizie/fifth-reasoning-club-meeting-llc-2017</a> for further information.<br /><br />Submissions for the Fifth Reasoning Club Conference are now open. All PhD candidates and early career researchers with interests in reasoning and inference, broadly construed, are encouraged to submit an abstract of up to 500 words (prepared for blind review) via EasyChair at <a href="https://easychair.org/conferences/?conf=rcc17" target="_blank">https://easychair.org/conferences/?conf=rcc17</a>.<br /><br />We especially welcome submissions from members of groups that are underrepresented in philosophy. We are committed to promoting diversity in our final programme.<br /><br />The deadline for submissions is <b>1 February 2017</b>. The final decision on submissions will be made by 15 March 2017.<br /><br />Grants will be available to help cover travel costs for contributed speakers. To apply for a travel grant, please send a CV and a short travel budget estimate in a single PDF file by 1 February 2017 to <a href="mailto:reasoningclubconference2017@gmail.com" target="_blank">reasoningclubconference2017@gmail.com</a>.<br /><br />For any queries please contact <a href="mailto:vcrupi@unito.it" target="_blank">Vincenzo Crupi</a> or <a href="mailto:jpkonek@gmail.com" target="_blank">Jason Konek</a>.<br /><br />The <a href="https://www.kent.ac.uk/secl/researchcentres/reasoning/club/index.html" target="_blank">Reasoning Club</a> is a network of institutes, centres, departments, and groups addressing research topics connected to reasoning, inference, and methodology broadly construed. It issues the monthly gazette <a href="http://blogs.kent.ac.uk/thereasoner/about/" target="_blank">The Reasoner</a>.<br /><br />Earlier editions of the meeting were held in <a href="http://www.vub.ac.be/CLWF/RC2012/" target="_blank">Brussels</a>, <a href="http://reasoningclubpisa.weebly.com/" target="_blank">Pisa</a>, <a href="https://reasoningclubkent.wordpress.com/" target="_blank">Kent</a>, and <a href="http://www.maths.manchester.ac.uk/news-and-events/events/fourth-reasoning-club-conf/" target="_blank">Manchester</a>.<br /><br />Jason Konek