Comments on M-Phi: Michael Caie's 'Calibration and Probabilism' (Guest post by Anna Mahtani)
Comment feed by Jeffrey Ketland; last updated 28 March 2024.

Anonymous — 29 May 2014, 18:24:

It's worth noting that similar issues arise for Joyce's accuracy-dominance argument and the Dutch-book argument. In both cases it is quite tempting to appeal to some putative doxastic goal, or at least good-making feature: in the former case, the goal of representing the world accurately; in the latter, the good-making feature of having credences which preclude monetary loss. In both cases, however, one can show, for reasons that parallel those outlined in "Calibration and Probabilism", that the pursuit of these goals or good-making features is sometimes best served by being probabilistically incoherent. (I consider these cases in a paper called "Rational Probabilistic Incoherence" in Phil Review 2013.) And in both cases there is an available move which parallels the move you suggest at the end of your post. Instead of looking
at how accurate a credal state would be were it to be yours, or what monetary exploitation is possible given that a credal state is yours, we simply consider the credal states in the abstract. And once we abstract away from the limitations imposed by the credal state's being held by some particular individual, the arguments for Probabilism can proceed. But, again, it seems to me that there is a challenge here: to say why we should care about credal states' being accurate, or not leading to monetary exploitation, unless accurate representation or non-exploitation is a doxastic goal or good-making feature. And, again, I'm not sure what a plausible story here would look like.

The general issue that you raise, then, does seem to me to be important. Whether a number of the best-known attempts to justify Probabilism are in good standing turns, in part, on whether the type of move that you suggest is ultimately viable.

Finally, I should note that two papers at this year's Formal Epistemology Workshop ("Coherence or Accuracy" by Jennifer Carr, and "The Foundations of Epistemic Decision Theory" by Ben Levinstein and Jason Konek) explore issues related to this question. Recent papers dealing with related issues are "Epistemic Decision Theory" by Hilary Greaves in Mind, and "Epistemic Teleology and the Separateness of Propositions" by Selim Berker in Phil Review.

Anonymous — 29 May 2014, 18:23:

Hi Anna, thanks for the blog post!
Here are a few thoughts in response.

As you note, in the paper I argue that there are cases in which, for some agent $S$, some algebra $\mathcal{A}$, and some $p > 0$, given that $S$ has a credal state defined over $\mathcal{A}$, it is impossible for $S$ to be calibrated to within $p$. If, then, one endorses the principle (which van Fraassen seems to endorse) that it is irrational for an agent to have a credal state if it is not possible for the agent to have that credal state and be calibratable to within $\epsilon$ (for every $\epsilon > 0$), then it follows that if $S$ has a credal state defined over $\mathcal{A}$, she is doomed to irrationality. This, however, conflicts with a plausible ought-implies-can principle. And so, I argue, we should reject the claim that it is irrational for an agent to have a credal state if it is impossible for her to have that credal state and be appropriately calibrated.

Suppose, however, that credal states, in some sense, constitutively aim at being close to relative frequencies. Is there some alternative normative principle that one could endorse that is compatible with ought-implies-can? In the paper, I argue that the best thing for a frequentist to say (roughly) is that in those cases in which calibration is precluded for an agent, the agent ought to have a credal state that comes as close as possible to matching the limiting relative frequencies. Interestingly, though, in certain cases the credal state with this property will be probabilistically incoherent.

Another response to the initial problem, suggested at the end of your post, is to link the rationality of a credal function $C(\cdot)$ for some agent $S$ not to whether it is possible for the agent to have that credal state and be calibrated, but simply to whether that credal state itself, considered independently, can be such that there are limiting relative frequencies
with which it lines up. If one says this, then it follows that one ought to have probabilistically coherent credences.

I think that this is an interesting option, and indeed the best response if one wants to salvage the calibration argument for Probabilism. My worry about this option is that it is hard for me to see why we should care whether a credal state itself is calibratable, unless we take calibratability to be a goal, something that we ought to strive for. But if we think that calibratability is a goal, then what seems to be really normatively relevant in assessing the rationality of a possible credal state $C(\cdot)$ for some agent $S$ is how calibrated $C(\cdot)$ could be were it to be $S$'s credal state, not how calibratable $C(\cdot)$ is in principle. Obviously, this is far from decisive. But there is, I think, a challenge here: to say why calibratability in principle is relevant for assessing the rationality of a credal state without appealing to the idea that calibratability is something our credal states should aim for.
I, at least, am not certain what that story should look like.

As you note, the cases I consider are similar in certain respects to Moore-paradoxical cases. Such propositions are consistent; however, for certain agents, they cannot be truly believed. I'm inclined to think that the Moore-paradoxical cases in fact support the idea that what's relevant for assessing the rationality of some doxastic state for an agent $S$ is not whether that state may in principle have some particular feature, but whether it can have that feature given that the agent has the doxastic state in question. After all, it does seem to be irrational for an agent to believe a proposition that is Moore-paradoxical for them. But that isn't because the proposition can't be true, or because it is impossible for a belief in the relevant proposition to be true. It would seem instead to be irrational because that particular agent cannot truly believe the proposition in question.

CONTINUED BELOW...
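The accuracy-dominance point raised in the first comment can be illustrated with a minimal numerical sketch (the numbers are hypothetical, not from the thread): an incoherent credal state assigning 0.6 to both $H$ and $\neg H$ is beaten at every world, under the Brier score, by the coherent state assigning 0.5 to each.

```python
# Hypothetical illustration of Joyce-style accuracy dominance under the
# Brier score. An incoherent credal state (credences summing to more than 1)
# is strictly less accurate than a coherent one at every world.

def brier(credences, world):
    """Brier inaccuracy of a credal state at a world.

    credences: dict mapping proposition -> credence in [0, 1]
    world: dict mapping proposition -> truth value (0 or 1)
    """
    return sum((world[p] - credences[p]) ** 2 for p in credences)

# Incoherent: credences in H and not-H sum to 1.2, not 1.
incoherent = {"H": 0.6, "not-H": 0.6}
# A coherent alternative: credences sum to 1.
coherent = {"H": 0.5, "not-H": 0.5}

# The two possible worlds over this tiny algebra.
worlds = [{"H": 1, "not-H": 0}, {"H": 0, "not-H": 1}]

# The coherent state is strictly more accurate in both worlds.
for w in worlds:
    assert brier(coherent, w) < brier(incoherent, w)

print([round(brier(incoherent, w), 2) for w in worlds])  # [0.52, 0.52]
print([round(brier(coherent, w), 2) for w in worlds])    # [0.5, 0.5]
```

As the comment notes, the philosophical question is not whether such dominance holds (it does, for any incoherent state) but why a state's dominance in the abstract should matter unless accuracy is a doxastic goal.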