Comments on M-Phi: Deferring to rationality -- does it preclude permissivism? (feed author: Jeffrey Ketland)

Thank you for expressing my worry far better than I could.

-- TruePath (2020-12-30)

This is a terrific and fascinating post, Richard! I'm 100% with you in endorsing permissivism about rational credences. But actually I think that Ben Levinstein's principle "Deference to Rationality" (DtR) has an even more radical flaw than the one that you identify.

Broadly speaking, the problem with this principle is akin to the one that Julia Staffel identified in "Should I pretend I'm perfect?". It is a principle that focuses on the case of an agent who has a grossly irrational credence function c, and then tries to specify what rationality requires of that agent by imposing conditions on c itself. We just shouldn't expect there to be any true, satisfiable principles of this sort.

The problem can be revealed particularly clearly if we assume my account of the meaning of the operator 'Rationality requires of you at t that...'. According to my account, this operator is equivalent to 'At all the relevantly available worlds at which your credences are perfectly rational at t,...'.
Crucially, all the "relevantly available" worlds need to be exactly like the actual world with respect to what determines which credence functions are rational for you at t and which are not (and the degree to which these functions are irrational).

So, now consider a proposed principle of the form 'If at t you have the grossly irrational credence function c, then rationality requires of you at t that...'.

Clearly, if this principle has any chance of being true, it must imply that part of what rationality requires of you in this case is precisely that you do not have c. Instead, what rationality requires of you in this case is that you have a different credence function c' - perhaps one of the closest fully rational credence functions to c, but certainly not c itself.

However, instead of characterizing any of these alternative credence functions that it is rational for you to have in this case, DtR just imposes a condition on the irrational credence function c - even though c is not the credence function that you have at any of the relevantly available worlds at which you are perfectly rational at t. But there is absolutely no reason to think that this irrational credence function meets this condition. Indeed, we can simply stipulate that it doesn't. So, DtR is obviously false.

Admittedly, there is also another interpretation of Levinstein's principle. On this second interpretation, the principle has the form 'If c is irrational, then at any available world at which c is rational...'. But as I noted above, all the "relevantly available" worlds need to be exactly like the actual world with respect to what determines which credence functions are rational and which are not. So, if c is actually irrational, there are no relevantly available worlds where c is perfectly rational.
Thus, on this interpretation, DtR is utterly vacuous, just like a universally quantified statement of the form 'At all mathematically possible worlds where 0=1,...'. Obviously, we should not expect such a principle to be, as you put it, "satisfiable"!

-- Ralph Wedgwood (2020-12-29)