Deferring to rationality -- does it preclude permissivism?

Permissivism about epistemic rationality is the view that there are bodies of evidence in response to which rationality permits a number of different doxastic attitudes. I'll be thinking here about the case of credences. Credal permissivism says: there are bodies of evidence in response to which rationality permits a number of different credence functions.

Over the past year, I've watched friends on social media adopt remarkably different credence functions based on the same information about aspects of the COVID-19 pandemic, the outcome of the US election, and the withdrawal of the UK from the European Union. And while I watch them scream at each other, cajole each other, and sometimes simply ignore each other, I can't shake the feeling that they are all taking rational stances. While they disagree dramatically, and while some will end up closer to the truth than others when it is finally revealed, it seems to me that all are responding rationally to their shared evidence, their opponents' protestations to the contrary notwithstanding. So permissivism is a very timely epistemic puzzle for 2020. What's more, this wonderful piece by Rachel Fraser made me see how my own William James-inspired approach to epistemology connects with a central motivation for believing in conspiracy theories, another major theme of this unloveable year.

One type of argument against credal permissivism turns on the claim that rationality is worthy of deference. The argument begins with a precise version of this claim, stated as a norm that governs credences. It proceeds by showing that, if epistemic rationality is permissive, then it is sometimes impossible to meet the demands of this norm. Taking this to be a reductio, the argument concludes that rationality cannot be permissive. I know of two versions of the argument, one due to Daniel Greco and Brian Hedden, and one due to Ben Levinstein. I'll mainly consider Levinstein's, since it fixes some problems with Greco and Hedden's. I'll consider David Thorstad's response to Greco and Hedden's argument, which would also work against Levinstein's argument were it to work at all. But I'll conclude that, while it provides a crucial insight, it doesn't quite work, and I'll offer my own alternative response.

Roughly speaking, you defer to someone on an issue if, upon learning their attitude to that issue, you adopt it as your own. So, for instance, if you ask me what I'd like to eat for dinner tonight, and I say that I defer to you on that issue, I'm saying that I will want to eat whatever I learn you would like to eat. That's a case of deferring to someone else's preferences---it's a case where we defer conatively to them. Here, we are interested in cases in which we defer to someone else's beliefs---that is, where we defer doxastically to them. Thus, I defer doxastically to my radiographer on the issue of whether I've got a broken finger if I commit to adopting whatever credence they announce in that diagnosis. By analogy, we sometimes say that we defer doxastically to a feature of the world if we commit to setting our credence in some way that is determined by that feature of the world. Thus, I might defer doxastically to a particular computer simulation model of sea level change on the issue of sea level rise by 2030 if I commit to setting my credence in a rise of 10cm to whatever probability that model reports when I run it repeatedly while perturbing its parameters and initial conditions slightly around my best estimate of their true values.
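To make that last kind of deference concrete, here is a minimal sketch in Python of the perturbed-ensemble procedure just described. The model and all of the numbers are invented placeholders, not a real climate model; the point is just the shape of the procedure: run the model many times with its parameters jittered around your best estimates, and set your credence to the frequency of runs on which the proposition comes out true.

```python
import random

def sea_level_model(sensitivity, base_rate):
    """Toy stand-in for a climate simulation: projected rise (cm) by 2030."""
    return base_rate * 10 + sensitivity * random.gauss(0, 1)

def deferred_credence(n_runs=10_000):
    """Estimate P(rise of at least 10cm) by perturbing the parameters."""
    hits = 0
    for _ in range(n_runs):
        sensitivity = random.gauss(2.0, 0.3)  # perturbed around best estimate 2.0
        base_rate = random.gauss(0.9, 0.1)    # perturbed around best estimate 0.9
        if sea_level_model(sensitivity, base_rate) >= 10:
            hits += 1
    return hits / n_runs

print(f"Deferring: credence in a 10cm rise = {deferred_credence():.3f}")
```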

In philosophy, there are a handful of well-known theses that turn on the claim that we are required to defer doxastically to this individual or that feature of the world---and we're required to do it on all matters. For instance, van Fraassen's Reflection Principle says that you should defer doxastically to your future self on all matters. That is, for any proposition $X$, conditional on your future self having credence $r$ in $X$, you should have credence $r$ in $X$. In symbols:

$$c(X\, |\, \text{my credence in $X$ at future time $t$ is $r$}) = r$$

And the Principal Principle says that you should defer to the objective chances on all doxastic matters by setting your credences to match the probabilities that they report. That is, for any proposition $X$, conditional on the objective chance of $X$ being $r$, you should have credence $r$ in $X$. In symbols:

$$c(X\, |\, \text{the objective chance of $X$ now is $r$}) = r$$

Notice that, in both cases, there is a single expert value to which you defer on the matter in question. At time $t$, you have exactly one credence in $X$, and the Reflection Principle says that, upon learning that single value, you should set your credence in $X$ to it. And there is exactly one objective chance of $X$ now, and the Principal Principle says that, upon learning it, you should set your credence in $X$ equal to it. You might be uncertain about what that single value is, but it is fixed and unique. So this account of deference does not cover cases in which there is more than one expert. For instance, it doesn't obviously apply if I defer not to a specific climate model, but to a group of them. In those cases, there is usually no fixed, unique value that is the credence they all assign to a proposition. So principles of the same form as the Reflection or Principal Principle do not say what to do if you learn one of those values, or some of them, or all of them. This problem lies at the heart of the deference argument against permissivism. Those who make the argument think that deference to groups should work in one way; those who defend permissivism against it think it should work in some different way.
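To see what satisfying such a principle amounts to, here is a small sketch in Python. The two chance hypotheses and the credences in them are made up; the code builds a credence function that obeys the Principal Principle by construction, and then checks the deference condition directly.

```python
# your credence in each chance hypothesis: c(the chance of X is r)
chance_hyps = {0.2: 0.5, 0.7: 0.5}

# build the joint credence c(X & chance = r) = r * c(chance = r),
# which is what the Principal Principle demands
joint = {(x, r): (r if x else 1 - r) * p
         for r, p in chance_hyps.items() for x in (True, False)}

def conditional_on_chance(r):
    """c(X | the objective chance of X is r)."""
    return joint[(True, r)] / (joint[(True, r)] + joint[(False, r)])

for r in chance_hyps:
    print(f"c(X | chance = {r}) = {conditional_on_chance(r)}")  # prints r back
```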

As I mentioned above, the deference argument begins with a specific, precise norm that is said to govern the deference we should show to rationality. The argument continues by claiming that, if rationality is permissive, then it is not possible to satisfy this norm. Here is the norm as Levinstein states it, where $c \in R_E$ means that $c$ is in the set $R_E$ of rational responses to evidence $E$:

Deference to Rationality (DtR) Suppose:

  1. $c$ is your credence function;
  2. $E$ is your total evidence;
  3. $c(c \in R_E) = 0$;
  4. $c'$ is a probabilistic credence function;
  5. $c(c' \in R_E) > 0$;

then rationality requires

$$c(-|c' \in R_E) = c'(-|c' \in R_E)$$

That is, if you are certain that your credence function is not a rational response to your total evidence, then, conditional on some alternative probabilistic credence function being a rational response to that evidence, you should set your credences in line with that alternative once you've brought it up to speed with your new evidence that it is a rational response to your original total evidence.

Notice, first, that Levinstein's principle is quite weak. It does not say of just anyone that they should defer to rationality. It says only that, if you are in the dire situation of being certain that you are yourself irrational, then you should defer to rationality. If you are sure you're irrational, then your conditional credences should be such that, were you to learn of a credence function that it's a rational response to your evidence, you would fall in line with the credences it assigns conditional on that same assumption that it is rational. Restricting its scope in this way makes the principle more palatable to permissivists, who will typically not think that someone who is already pretty sure that they are rational must switch credences upon learning that there are alternative rational responses out there.

Notice also that you need only show such deference to rational credence functions that satisfy the probability axioms. This restriction is essential, for otherwise (DtR) will force you to violate the probability axioms yourself. After all, if $c(-)$ is probabilistic, then so is $c(-|X)$ for any $X$ with $c(X) > 0$. Thus, if $c'(-|c' \in R_E)$ is not probabilistic, and $c$ defers to $c'$ in the way Levinstein describes, then $c(-|c' \in R_E)$ is not probabilistic, and thus neither is $c$.
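The fact appealed to here, that conditioning a probabilistic credence function always yields another probabilistic credence function, is easy to check numerically. A minimal sketch, with four arbitrary worlds and arbitrary credences:

```python
# a probabilistic credence function over four mutually exclusive worlds
c = {"w1": 0.1, "w2": 0.2, "w3": 0.3, "w4": 0.4}

def conditionalize(cred, event):
    """Return cred(- | event): renormalize the credence over worlds in the event."""
    total = sum(cred[w] for w in event)
    assert total > 0, "cannot condition on a zero-credence event"
    return {w: (cred[w] / total if w in event else 0.0) for w in cred}

posterior = conditionalize(c, {"w2", "w3"})
print(posterior)                                 # {'w1': 0.0, 'w2': 0.4, 'w3': 0.6, 'w4': 0.0}
print(abs(sum(posterior.values()) - 1) < 1e-12)  # True: probabilism is preserved
```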

Now, suppose:

  • $c$ is your credence function;
  • $E$ is your total evidence;
  • $c'$ and $c''$ are probabilistic credence functions with$$c'(-|c' \in R_E\ \&\ c'' \in R_E) \neq c''(-|c' \in R_E\ \&\ c'' \in R_E)$$That is, $c'$ and $c''$ are distinct and remain distinct even once they become aware that both are rational responses to $E$;
  • $c(c' \in R_E\ \&\ c'' \in R_E) > 0$. That is, you give some credence to both of them being rational responses to $E$;
  • $c(c \in R_E) = 0$. That is, you are certain that your own credence function is not a rational response to $E$.

Then, by (DtR),

  • $c(-|c' \in R_E) = c'(-|c' \in R_E)$
  • $c(-|c'' \in R_E) = c''(-|c'' \in R_E)$ 

Thus, conditioning both sides of the first identity on $c'' \in R_E$ and both sides of the second identity on $c' \in R_E$, we obtain

  • $c(-|c' \in R_E\ \&\ c'' \in R_E) = c'(-|c' \in R_E\ \&\ c'' \in R_E)$ 
  • $c(-|c'' \in R_E\ \&\ c' \in R_E) = c''(-|c' \in R_E\ \&\ c'' \in R_E)$

But, by assumption, $c'(-| c' \in R_E\ \&\ c'' \in R_E) \neq c''(-|c' \in R_E\ \&\ c'' \in R_E)$. So (DtR) cannot be satisfied.
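To see the clash in miniature, here is a sketch in Python. It builds two probabilistic credence functions over eight coarse-grained worlds (the numbers are invented) that disagree about $X$ even conditional on both being rational, and then computes the two conditional credences that (DtR) would require a single credence function $c$ to match simultaneously.

```python
from itertools import product

# a world settles: (is X true?, is c' rational?, is c'' rational?)
worlds = list(product([True, False], repeat=3))

def cond_X(cred, prop):
    """cred(X | prop), where cred maps worlds to credences."""
    num = sum(p for w, p in cred.items() if w[0] and prop(w))
    den = sum(p for w, p in cred.items() if prop(w))
    return num / den

# two probabilistic credence functions (hypothetical numbers) that still
# disagree about X conditional on both being rational responses to E
c1 = dict(zip(worlds, [0.30, 0.05, 0.05, 0.10, 0.10, 0.05, 0.05, 0.30]))
c2 = dict(zip(worlds, [0.10, 0.05, 0.05, 0.30, 0.30, 0.05, 0.05, 0.10]))

both_rational = lambda w: w[1] and w[2]

print(f"{cond_X(c1, both_rational):.2f}")  # 0.75 = c'(X | both rational)
print(f"{cond_X(c2, both_rational):.2f}")  # 0.25 = c''(X | both rational)
# (DtR) would force c(X | both rational) to equal both values at once: impossible.
```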

One thing to note about this argument: if it works, it establishes not only that there can be no two different rational responses to the same evidence, but that it is irrational to be anything less than certain of this. After all, what is required to derive the contradiction from (DtR) is not that there are two probabilistic credence functions $c'$ and $c''$ such that $c'(-|c' \in R_E\ \&\ c'' \in R_E) \neq c''(-|c' \in R_E\ \&\ c'' \in R_E)$ that are both rational responses to $E$. Rather, what is required is only that there are two probabilistic credence functions $c'$ and $c''$ with $c'(-|c' \in R_E\ \&\ c'' \in R_E) \neq c''(-|c' \in R_E\ \&\ c'' \in R_E)$ that you think might both be rational responses to $E$---that is, $c(c' \in R_E\ \&\ c'' \in R_E) > 0$. The conclusion that it is irrational to even entertain permissivism strikes me as too strong, but perhaps those who reject permissivism will be happy to accept it.

Let's turn, then, to a more substantial worry, given compelling voice by David Thorstad: (DtR) is too strong because the deontic modality that features in it is too strong. As I hinted above, the point is that the form of the deference principles that Greco & Hedden and Levinstein use is borrowed from cases---such as the Reflection Principle and the Principal Principle---in which there is just one expert value, though it might be unknown to you. In those cases, it is appropriate to say that, upon learning the single value and nothing more, you are required to set your credence in line with it. But, unless we simply beg the question against permissivism and assume there is a single rational response to every body of evidence, this isn't our situation. Rather, it's more like the case where you defer to a group of experts, such as a group of climate models. And in this case, Thorstad says, it is inappropriate to demand that you set your credence in line with an expert's credence when you learn what it is. Rather, it is at most appropriate to permit you to do that. That is, Levinstein's principle should not say that rationality requires your credence function to assign the conditional credences stated in its consequent; it should say instead that rationality allows it.

Thorstad motivates his claim by drawing an analogy with a moral case. Suppose you see two people drowning. They're called John and James, and you know that you will be able to save at most one. So the actions available to you are: save John, save James, save neither. And the moral actions are: save John, save James. But now consider a deference principle governing this situation that is analogous to (DtR): it demands that, upon learning that it is moral to save James, you must do that; and upon learning that it is moral to save John, you must do that. From this, we can derive a contradiction in a manner somewhat analogous to that in which we derived the contradiction from (DtR) above: if you learn both that it is moral to save John and that it is moral to save James, you should do both; but that isn't an available action; so moral permissivism must be false. But I take it that no moral theory will tolerate that conclusion in this case: surely it is permissible to save John and permissible to save James. So, Thorstad argues, there must be something wrong with the moral deference principle; and, by analogy, there must be something wrong with the analogous doxastic principle (DtR).

Thorstad's diagnosis is this: the correct deference principle in the moral case should say: upon learning that it is moral to save James, you may do that; upon learning that it is moral to save John, you may do that. You thereby avoid the contradiction, and moral permissivism is safe. Similarly, the correct doxastic deference principle is this: upon learning that a credence function is rational, it is permissible to defer to it. In Levinstein's framework, the following is rationally permissible, not rationally mandated:$$c(-|c' \in R_E) = c'(-|c' \in R_E)$$

I think Thorstad's example is extremely illuminating, but for reasons rather different from his. Recall that a crucial feature of Levinstein's version of the deference argument against permissivism is that it applies only to people who are certain that their current credences are irrational. If we add the analogous assumption to Thorstad's case, his verdict is less compelling. Suppose, for instance, you are currently committed to saving neither John nor James from drowning; that's what you plan to do; it's the action you have formed an intention to perform. What's more, you're certain that this action is not moral. But you're uncertain whether either of the other two available actions is moral. And let's add a further twist to drive home the point. Suppose, furthermore, that you are certain that you are just about to learn, of exactly one of them, that it is permissible. And add to that the fact that, immediately after you learn, of exactly one of them, that it is moral, you must act---failing to do so will leave both John and James to drown. In this case, I think, it's quite reasonable to say that, upon learning that saving James is permissible, you are not only morally permitted to drop your intention to save neither and replace it with the intention to save James, but you are also morally required to do so; and the same goes should you learn that it is permissible to save John. It would, I think, be impermissible to save neither, since you're certain that's immoral and you know of an alternative that is moral; and it would be impermissible to save John, since you are still uncertain about the moral status of that action, while you are certain that saving James is moral; and it would be morally required to save James, since you are certain of that action alone that it is moral. Now, Levinstein's principle might seem to hold for individuals in an analogous situation. Suppose you're certain that your current credences are irrational. And suppose you will learn of only one credence function that it is rationally permissible. At least in this situation, it might seem that it is rationally required that you adopt the credence function you learn is rationally permissible, just as you are morally required to perform the single act you learn is moral. So, is Levinstein's argument rehabilitated?

I think not. Thorstad's example is useful, but not because the cases of rationality and morality are analogous; rather, precisely because it draws attention to the fact that they are disanalogous. After all, all moral actions are better than all immoral ones. So, if you are committed to an action you know is immoral, and you learn of another that it is moral, and you know you'll learn nothing more about morality, you must commit to performing the action you've learned is moral. Doing so is the only way you know of to improve, for sure, the action you'll perform. But this is not the case for rational attitudes. It is not the case that all rational attitudes are better than all irrational attitudes. Let's see a few examples.

Suppose my preferences over a set of acts $a_1, \ldots, a_N$ are as follows, where $N$ is some very large number:

$$a_1 \prec a_2 \prec a_3 \prec \ldots \prec a_{N-3} \prec a_{N-2} \prec a_{N-1} \prec a_N \prec a_{N-2}$$

This is irrational: given the cycle $a_{N-2} \prec a_{N-1} \prec a_N \prec a_{N-2}$, the ordering cannot be both irreflexive and transitive, since transitivity would give $a_{N-2} \prec a_{N-2}$. And suppose I learn that the following preferences are rational:

$$a_1 \succ a_2 \succ a_3 \succ \ldots \succ a_{N-3} \succ a_{N-2} \succ a_{N-1} \succ a_N$$

Then surely it is not rationally required of me to adopt these alternative preferences. (Indeed, it seems to me that rationality might even prohibit me from transitioning from the first, irrational set to the second, rational set, but I don't need that stronger claim.) In the end, my original preferences are irrational because of a small, localised flaw. But they nonetheless express coherent opinions about a lot of comparisons. And, concerning all of those comparisons, the alternative preferences take exactly the opposite view. Moving to the latter in order to avoid having preferences that are flawed in the way that the original set is flawed does not seem rationally required, and indeed might seem irrational.
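The localised flaw is easy to exhibit mechanically. Here is a small sketch that represents strict preference as a directed graph and detects the cycle $a_{N-2} \prec a_{N-1} \prec a_N \prec a_{N-2}$, shown with $N = 4$ to keep it readable:

```python
def has_cycle(prefs):
    """Detect a strict-preference cycle (the failure of transitivity plus
    irreflexivity described above) by depth-first search."""
    def visit(a, seen):
        if a in seen:
            return True
        return any(visit(b, seen | {a}) for b in prefs.get(a, []))
    return any(visit(a, set()) for a in prefs)

# a1 < a2 < a3 < a4, plus the rogue comparison a4 < a2 from the example
prefs = {"a1": ["a2"], "a2": ["a3"], "a3": ["a4"], "a4": ["a2"]}
print(has_cycle(prefs))  # True: a2 < a3 < a4 < a2, so the ordering is irrational
```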

Something similar happens in the credal case, at least according to the accuracy-first epistemologist. Suppose I have credence $0.1$ in $X$ and $1$ in $\overline{X}$. And suppose the single legitimate measure of inaccuracy is the Brier score. I don't know this, but I do know a few things: first, I know that accuracy is the only fundamental epistemic value, and I know that a credence function's accuracy scores at different possible worlds determine its rationality at this world; furthermore, I know that my credences are accuracy dominated and therefore irrational, but I don't know what dominates them. Now suppose I learn that the following credences are rational: $0.95$ in $X$ and $0.05$ in $\overline{X}$. It seems that I am not required to adopt these credences (and, again, it seems that I am not even rationally permitted to do so, though again this latter claim is stronger than I need). While my old credences are irrational, they do nonetheless encode something like a point of view. And, from that point of view, the alternative credences look much, much worse than staying put. While I know that mine are irrational and accuracy dominated, though I don't know by what, I also know that, from my current, slightly incoherent point of view, the rational ones look a lot less accurate than mine. And indeed they will be much less accurate than mine if $X$ turns out to be false.
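Here is the arithmetic behind that judgment, as a short sketch that computes the Brier scores at both worlds and the expected inaccuracies as weighted by my incoherent credences:

```python
def brier(cred, world):
    """Total Brier score (inaccuracy) of credences in X and not-X at a world."""
    truth = {"X": 1.0, "notX": 0.0} if world == "X" else {"X": 0.0, "notX": 1.0}
    return sum((truth[p] - cred[p]) ** 2 for p in cred)

mine = {"X": 0.1, "notX": 1.0}       # incoherent: the credences sum to 1.1
learned = {"X": 0.95, "notX": 0.05}  # the credences I learn are rational

for w in ("X", "notX"):
    print(w, round(brier(mine, w), 4), round(brier(learned, w), 4))
# at the notX world, mine scores 0.01 while the rational credences score 1.805

# expected inaccuracy of each, weighted by my (incoherent) credences:
exp_mine = 0.1 * brier(mine, "X") + 1.0 * brier(mine, "notX")
exp_learned = 0.1 * brier(learned, "X") + 1.0 * brier(learned, "notX")
print(round(exp_mine, 4), round(exp_learned, 4))  # 0.191 vs 1.8055: far worse by my lights
```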

So, even in the situation in which Levinstein's principle is most compelling, namely, when you are certain you're irrational and you will learn of only one credence function that it is rational, still it doesn't hold. It is possible to be sure that your credence function is an irrational response to your evidence, sure that an alternative is a rational response, and yet not be required to adopt the alternative because learning that the alternative is rational does not teach you that it's better than your current irrational credence function for sure---it might be much worse. This is different from the moral case. So, as stated, Levinstein's principle is false.

However, to make the deference argument work, Levinstein's principle need only hold in a single case. Levinstein describes a family of cases---those in which you're certain you're irrational---and claims that it holds in all of those. Thorstad's objection shows that it doesn't. Responding on Levinstein's behalf, I narrowed the family of cases to avoid Thorstad's objection---perhaps Levinstein's principle holds when you're certain you're irrational and know you'll only learn of one credence function that it's rational. After all, the analogous moral principle holds in those cases. But we've just seen that the doxastic version doesn't always hold there, because learning that an alternative credence function is rational does not teach you that it is better than your irrational credence function in the way that learning an act is moral teaches you that it's better than the immoral act you intend to perform. But perhaps we can narrow the range of cases yet further to find one in which the principle does hold.

Suppose, for instance, you are certain you're irrational, you know you'll learn of just one credence function that it's rational, and moreover you know you'll learn that it is better than yours. Thus, in the accuracy-first framework, suppose you'll learn that it accuracy dominates you. Then surely Levinstein's principle holds here? And this would be sufficient for Levinstein's argument, since each non-probabilistic credence function is accuracy dominated by many different probabilistic credence functions; so we could find the distinct $c'$ and $c''$ we need for the reductio.

Not so fast, I think. How you should respond when you learn that $c'$ is rational depends on what else you think about what determines the rationality of a credence function. Suppose, for instance, you think that a credence function is rational just in case it is not accuracy dominated, but you don't know which are the legitimate measures of accuracy. Perhaps you think there is only one legitimate measure of accuracy, and you know it's either the Brier score---$\mathfrak{B}(c, i) = \sum_{X \in \mathcal{F}} |w_i(X) - c(X)|^2$---or the absolute value score---$\mathfrak{A}(c, i) = \sum_{X \in \mathcal{F}} |w_i(X) - c(X)|$---but you don't know which. And suppose your credence function is $c(X) = 0.1$ and $c(\overline{X}) = 1$, as above. Now you learn that $c'(X) = 0.05$ and $c'(\overline{X}) = 0.95$ is rational and an accuracy dominator. So you learn that $c'$ is more accurate than $c$ at all worlds, and, since $c'$ is rational, there is nothing that is more accurate than $c'$ at all worlds. Then you thereby learn that the Brier score is the only legitimate measure of accuracy. After all, according to the absolute value score, $c'$ does not accuracy dominate $c$; in fact, $c$ and $c'$ have exactly the same absolute value score at both worlds. You thereby learn that the credence functions that accuracy dominate you without themselves being accuracy dominated are those for which $c(X)$ lies strictly between the solution of $(1-x)^2 + (1-x)^2 = (1-0.1)^2 + (0-1)^2$ that lies in $[0, 1]$ and the solution of $(0-x)^2 + (1-(1-x))^2 = (0-0.1)^2 + (1-1)^2$ that lies in $[0, 1]$, and $c(\overline{X}) = 1 - c(X)$. You are then permitted to pick any one of them---they are all guaranteed to be better than yours. You are not obliged to pick $c'$ itself.
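The numbers here can be checked with a few lines of code. This sketch computes the interval of undominated Brier dominators of $c$, and confirms that, under the absolute value score, $c'$ merely ties with $c$ at both worlds:

```python
import math

def scores(cred):
    """Return ((Brier at X-world, Brier at notX-world),
    (absolute at X-world, absolute at notX-world)) for credences (x, y)."""
    x, y = cred
    return (((1 - x) ** 2 + y ** 2, x ** 2 + (1 - y) ** 2),
            (abs(1 - x) + y, x + abs(1 - y)))

mine = (0.1, 1.0)
(b_true, b_false), (a_true, a_false) = scores(mine)

# probabilistic dominators (x, 1-x) must Brier-beat mine at both worlds:
#   2(1-x)^2 < b_true   and   2x^2 < b_false
lower = 1 - math.sqrt(b_true / 2)  # ~0.0487
upper = math.sqrt(b_false / 2)     # ~0.0707
print(f"Brier dominators: x strictly between {lower:.4f} and {upper:.4f}")  # 0.05 is inside

# under the absolute value score, c' = (0.05, 0.95) exactly ties with mine:
print([round(v, 4) for v in scores((0.05, 0.95))[1]])  # [1.9, 0.1]
print([round(v, 4) for v in (a_true, a_false)])        # [1.9, 0.1]
```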

The crucial point is this: learning that $c'$ is rational teaches you something about the features of a credence function that determine whether it is rational---it teaches you that they render $c'$ rational! And that teaches you a bit about the set of rational credence functions---you learn it contains $c'$, of course, but you also learn other normative facts, such as the correct measure of inaccuracy, perhaps, or the correct decision principle to apply with the correct measure of inaccuracy to identify the rational credence functions. And learning those things may well shift your current credences, but you are not compelled to adopt $c'$.

Indeed, you might be compelled to adopt something other than $c'$. An example: suppose that, instead of learning that $c'$ is rational and accuracy dominates $c$, you learn that $c''$ is rational and accuracy dominates $c$, where $c''$ is a probability function that Brier dominates $c$, and $c'' \neq c'$. Then, as before, you learn that the Brier score and not the absolute value score is the correct measure of inaccuracy, and thereby learn the set of credence functions that accuracy dominate yours. Perhaps rationality then requires you to fix up your credence function so that it is rational, but in a way that minimizes the amount by which you change your current credences. How to measure this? Well, perhaps you're required to pick an undominated dominator $c^*$ such that the expected inaccuracy of $c$ from the point of view of $c^*$ is minimal. That is, you pick the credence function that dominates you, isn't itself dominated, and thinks most highly of your original credence function. Measuring accuracy using the Brier score, this turns out to be (approximately) the credence function $c'$ described above. Thus, given this reasonable account of how to respond when you learn what the rational credence functions are, upon learning that $c''$ is rational, rationality then requires you to adopt (approximately) $c'$, and not $c''$.
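Here is a sketch of that selection rule for the running example: scan the (weakly) undominated Brier dominators of $c = (0.1, 1)$ for the one by whose lights $c$'s expected inaccuracy is smallest.

```python
import math

# endpoints of the interval of probabilistic Brier dominators of (0.1, 1),
# carried over from the previous sketch (weak dominance at the endpoints)
lower, upper = 1 - math.sqrt(1.81 / 2), math.sqrt(0.01 / 2)

def expected_brier_of_mine(x):
    """Expected Brier score of my credences (0.1, 1) by the lights of (x, 1-x)."""
    score_if_X, score_if_notX = (1 - 0.1) ** 2 + 1.0 ** 2, 0.1 ** 2
    return x * score_if_X + (1 - x) * score_if_notX

# the expectation 0.01 + 1.8x is increasing in x, so the dominator that thinks
# most highly of my original credences sits at the bottom of the interval
grid = [lower + i * (upper - lower) / 1000 for i in range(1001)]
best = min(grid, key=expected_brier_of_mine)
print(round(best, 4))  # 0.0487: approximately the c' of the earlier example
```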

In sum: For someone certain their credence function $c$ is irrational, learning only that $c'$ is rational is not enough to compel them to move to $c'$, nor indeed to change their credences at all, since they've no guarantee that doing so will improve their situation. To compel them to change their credences, you must teach them how to improve their epistemic situation. But when you teach them that doing a particular thing will improve their epistemic situation, that usually teaches them normative facts of which they were uncertain before---how to measure epistemic value, or the principles for choosing credences once you've fixed how to measure epistemic value---and doing that will typically teach them other ways to improve their epistemic situation besides the one you've explicitly taught them. Sometimes there will be nothing to tell between all the ways they've learned to improve their epistemic situation, and so all will be permissible, as Thorstad imagines; and sometimes there will be reason to pick just one of those ways, and so that will be mandated, even if epistemic rationality is permissive. In either case, Levinstein's argument does not go through. The deference principle on which it is based is not true.

Comments

  1. This is a terrific and fascinating post, Richard! I'm 100% with you in endorsing permissivism about rational credences. But actually I think that Ben Levinstein's principle "Deference to Rationality" (DtR) has an even more radical flaw than the one that you identify.

    Broadly speaking, the problem with this principle is akin to the one that Julia Staffel identified in "Should I pretend I'm perfect?". It is a principle that focuses on the case of an agent who has a grossly irrational credence function c, and then tries to specify what rationality requires of the agent in question by imposing conditions on c itself. We just shouldn't expect there to be any true, satisfiable principles of this sort.

    The problem can be revealed particularly clearly if we assume my account of the meaning of the operator 'Rationality requires of you at t that...'. According to my account, this operator is equivalent to 'At all the relevantly available worlds at which your credences are perfectly rational at t,...'. Crucially, all the "relevantly available" worlds need to be exactly like the actual world with respect to what determines which credence functions are rational for you at t and which are not (and the degree to which these functions are irrational).

    So, now consider a proposed principle of the form 'If at t you have the grossly irrational credence function c, then rationality requires of you at t that...'.

    Clearly, if this principle has any chance of being true, then it must imply that part of what rationality requires of you in this case is precisely that you do not have c. Instead, what rationality requires of you in this case is that you have a different credence function c' instead - perhaps one of the closest fully rational credence functions to c, but certainly not c itself.

    However, instead of characterizing any of these alternative credence functions that it is rational for you to have in this case, DtR just imposes a condition on this irrational credence function c - even though c is not the credence function that you have at any of the relevantly available worlds at which you are perfectly rational at t. But there is absolutely no reason to think that this irrational credence function meets this condition. Indeed, we can just stipulate that it doesn't. So, DtR is obviously false.

    Admittedly, there is also another interpretation of Levinstein's principle. On this second interpretation, the principle has the form 'If c is irrational, then at any available world at which c is rational...' But as I noted above, all the "relevantly available" worlds need to be exactly like the actual world with respect to what determines which credence functions are rational and which are not. So, if c is actually irrational, there are no relevantly available worlds where c is perfectly rational. Thus, on this interpretation, DtR is utterly vacuous, just like a universally quantified statement of the form 'At all mathematically possible worlds where 0=1,...' Obviously, we should not expect such a principle to be, as you put it, "satisfiable"!

    Replies
    1. Thank you for expressing my worry far better than I could.
