How permissive is rationality? Horowitz's value question for moderate permissivism

Rationality is good; irrationality is bad. Most epistemologists would agree with this rather unnuanced take, regardless of their view of what exactly constitutes rationality and its complement. Granted this, a good test of a thesis in epistemology is whether it can explain why these two claims are true. Can it answer the value question: Why is rationality valuable and irrationality not? And indeed Sophie Horowitz gives an extremely illuminating appraisal of different degrees of epistemic permissivism and impermissivism by asking of each what answer it might give. Her conclusion is that the extreme permissivist -- played in her paper by the extreme subjective Bayesian, who thinks that satisfying Probabilism and being certain of your evidence is necessary and sufficient for rationality -- can give a satisfying answer to this question, or, at least, an answer that is satisfying from their own point of view. And the extreme impermissivist -- played here by the objective Bayesian, who thinks that rationality requires something like the maximum entropy distribution relative to your evidence -- can do so too. But, Horowitz argues, the moderate permissivist -- played by the moderate Bayesian, who thinks rationality imposes requirements more stringent than merely Probabilism, but who does not think they're stringent enough to pick out a unique credence function -- cannot. In this post, I'd like to raise some problems for Horowitz's assessment, and try to offer my own answer to the value question on behalf of the moderate Bayesian. (Full disclosure: If I'm honest, I think I lean towards extreme permissivism, but I'd like to show that moderate permissivism can defend itself against Horowitz's objection.)

Let's begin with the accounts that Horowitz gives on behalf of the extreme permissivist and the impermissivist.

The extreme permissivist -- the extreme subjective Bayesian, recall -- can say that only by being rational can you have a credence function that is immodest -- where a credence function is immodest if it uniquely maximizes expected epistemic utility from its own point of view. This is because Horowitz, like others in the epistemic utility theory literature, assumes that epistemic utility is measured by strictly proper measures, so that every probabilistic credence function expects itself to be better than any alternative credence function. From this, we can conclude that, on the extreme permissivist view, rationality is sufficient for immodesty. It's trickier to show that it is also necessary, since it isn't clear what we mean by the expected epistemic utility of a credence function from the point of view of a non-probabilistic credence function -- the usual definitions of expectation make sense only for probabilistic credence functions. Fortunately, however, we don't have to clarify this much. We need only say that, at the very least, if one credence function is epistemically better than another at all possible worlds -- that is, in decision theory parlance, if the first dominates the second -- then any credence function, probabilistic or not, will expect the first to be better than the second. We then combine this with the result that, if epistemic utility is measured by a strictly proper measure, then, for each non-probabilistic credence function, there is a probabilistic credence function that dominates it, while for each probabilistic credence function, there is no such dominator (this result traces back to Savage's 1971 paper; Predd et al. give the proof in detail when the measure is additive; I then generalised it to remove the additivity assumption). This then shows that being rational is necessary for being immodest. So, according to Horowitz's answer on behalf of the extreme permissivist, being rational is good and being irrational is bad because being rational is necessary and sufficient for being immodest; and it's good to be immodest and bad to be modest.
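To see the dominance half of this in action, here is a minimal numerical sketch (my illustration, not Horowitz's, using the Brier score -- a strictly proper measure defined in full below): a non-probabilistic credence function over $X$ and $\overline{X}$ is dominated by its nearest probabilistic neighbour.

```python
# A minimal sketch (my illustration) of the dominance result, using the
# Brier score. A non-probabilistic credence function over {X, not-X} is
# dominated by its Euclidean projection onto the probabilistic functions.

def brier_utility(credences, x_is_true):
    """Negative Brier score of (c_X, c_notX) at a world."""
    c_x, c_notx = credences
    truth_x, truth_notx = (1.0, 0.0) if x_is_true else (0.0, 1.0)
    return -((truth_x - c_x) ** 2 + (truth_notx - c_notx) ** 2)

c = (0.6, 0.6)   # non-probabilistic: the credences sum to 1.2
p = (0.5, 0.5)   # its Euclidean projection onto the probabilistic functions

for x_is_true in (True, False):
    print(brier_utility(c, x_is_true), brier_utility(p, x_is_true))
# c scores about -0.52 at each world; p scores -0.5 at each world.
# So p is strictly better everywhere: c is dominated, hence not immodest.
```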

On the other hand, the impermissivist can say that, by being rational, you are maximizing expected accuracy from the point of view of the one true rational credence function. That's their answer to the value question, according to Horowitz.

We'll return to the question of whether these answers are satisfying below. But first I want to turn to Horowitz's claim that the moderate Bayesian cannot give a satisfactory answer. I'll argue that, if the two answers just given on behalf of the extreme permissivist and extreme impermissivist are satisfactory, then there is a satisfactory answer that the moderate permissivist can give. Then I'll argue that, in fact, these answers aren't very satisfying. And I'll finish by sketching my preferred answer on behalf of the moderate permissivist. This is inspired by William James' account of epistemic risks in The Will to Believe, which leads me to discuss another Horowitz paper.

Horowitz's strategy is to show that the moderate permissivist cannot find a good epistemic feature of credence functions that belongs to all that they count as rational, but does not belong to any they count as irrational. The extreme permissivist can point to immodesty; the extreme impermissivist can point to maximising expected epistemic utility from the point of view of the sole rational credence function. But, for the moderate, there's nothing. Or so Horowitz argues.

For instance, Horowitz initially considers the suggestion that rational credence functions guarantee you a minimum amount of epistemic utility. As she notes, the problem with this is that either it leads to impermissivism, or it fails to include all and only the credence functions the moderate considers rational. Let's focus on the case in which we have opinions about a proposition and its negation -- the point generalizes to richer sets of propositions. We'll represent the credence functions as pairs $(c(X), c(\overline{X}))$. And let's measure epistemic utility using the Brier score. So, when $X$ is true, the epistemic utility of $(x, y)$ is $-(1-x)^2 - y^2$, and when $X$ is false, it is $-x^2 - (1-y)^2$. Then, for $r > -0.5$, there is no credence function that guarantees you epistemic utility of at least $r$ -- if you have at least that much epistemic utility at one world, you have less than that at the other. For $r = -0.5$, there is exactly one credence function that guarantees you epistemic utility of at least $r$ -- it is the uniform credence function $(0.5, 0.5)$. And for $r < -0.5$, there are both probabilistic and non-probabilistic credence functions that guarantee you epistemic utility of at least $r$. So, Horowitz concludes, a certain level of guaranteed epistemic utility can't be what separates the rational from the irrational for the moderate permissivist, since for any level, either no credence function guarantees it, exactly one does, or there are both credence functions the moderate considers rational and credence functions they consider irrational that guarantee it.
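A quick numerical check of these thresholds (my sketch, using the Brier score as just defined):

```python
# A quick check (my sketch) of the guaranteed-utility thresholds under
# the Brier score defined above.
import numpy as np

def guaranteed_utility(c_x, c_notx):
    """Worst-case Brier utility of (c_X, c_notX) across the two worlds."""
    utility_if_true = -((1 - c_x) ** 2 + c_notx ** 2)
    utility_if_false = -(c_x ** 2 + (1 - c_notx) ** 2)
    return min(utility_if_true, utility_if_false)

grid = np.linspace(0, 1, 201)
best = max(guaranteed_utility(x, y) for x in grid for y in grid)
print(best)  # -0.5, attained only by the uniform (0.5, 0.5)

# For r < -0.5, non-probabilistic functions can meet the guarantee too:
print(guaranteed_utility(0.5, 0.4))  # about -0.61, though 0.5 + 0.4 < 1
```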

She identifies a similar problem if we think not about guaranteed accuracy but about expected accuracy. Suppose, as the moderate permissivist urges, that some but not all probability functions are rationally permissible. Then for many rational credence functions, there will be irrational ones that they expect to be better than they expect some rational credence functions to be. Horowitz gives the example of a case in which the rational credence in $X$ is between 0.6 and 0.8 inclusive. Then someone with credence 0.8 will expect the irrational credence 0.81 to be better than it expects the rational credence 0.7 to be -- at least according to many strictly proper measures of epistemic utility. So, Horowitz concludes, whatever separates the rational from the irrational, it cannot be considerations of expected epistemic utility.
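We can verify Horowitz's example for the Brier score in particular (a sketch with my own helper function; the numbers are hers):

```python
# Verifying the example under the Brier score (my sketch): from the point
# of view of credence 0.8 in X, credence 0.81 (irrational here) has higher
# expected utility than credence 0.7 (rational here).

def expected_brier(p, c):
    """Expected Brier utility of credence c in X (and 1 - c in not-X),
    from the point of view of probabilistic credence p in X."""
    utility_if_true = -2 * (1 - c) ** 2    # penalised on both X and not-X
    utility_if_false = -2 * c ** 2
    return p * utility_if_true + (1 - p) * utility_if_false

print(expected_brier(0.8, 0.81))  # about -0.3202
print(expected_brier(0.8, 0.7))   # -0.34, so 0.81 is preferred to 0.7
```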

I'd like to argue that, in fact, Horowitz should be happy with appeals to guaranteed or expected epistemic utility. Let's take guaranteed utility first. All that the moderate permissivist needs to say to answer the value question is that there are two valuable things that you obtain by being rational: immodesty and a guaranteed level of epistemic utility. Immodesty rules out all non-probabilistic credence functions, while the guaranteed level of epistemic utility narrows further -- how narrow depends on how much epistemic utility you wish to guarantee. So, for instance, suppose we say that the rational credence functions are exactly those $(x, 1-x)$ with $0.4 \leq x \leq 0.6$. Then each is immodest. And each has a guaranteed epistemic utility of at least $-(1-0.4)^2 - 0.6^2 = -0.72$. If Horowitz is satisfied with the immodesty answer to the value question when the extreme permissivist gives it, I think she should also be satisfied with it when the moderate permissivist combines it with a requirement not to risk certain low epistemic utilities (in this case, utilities below $-0.72$). And this combination of principles rules in all of the credence functions that the moderate counts as rational and rules out all they count as irrational.
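Continuing the numerical sketch above (again my illustration), we can check that every credence function this moderate counts as rational clears the $-0.72$ bar, and that the bar is attained at the endpoints:

```python
# Continuing the sketch (my illustration): every (x, 1-x) with
# 0.4 <= x <= 0.6 guarantees Brier utility of at least -0.72, and the
# bound is attained at the endpoints x = 0.4 and x = 0.6.
import numpy as np

def guaranteed_utility(x):
    """Worst-case Brier utility of the probabilistic credences (x, 1-x)."""
    return min(-2 * (1 - x) ** 2, -2 * x ** 2)

xs = np.linspace(0.4, 0.6, 201)
print(min(guaranteed_utility(x) for x in xs))  # -0.72
```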

Next, let's think about expected epistemic utility. Suppose that the set of credence functions that the moderate permissivist counts as rational is a closed convex set. For instance, perhaps the set of rational credence functions is $$R = \{c : \{X, \overline{X}\} \rightarrow [0, 1] : 0.6 \leq c(X) \leq 0.8\ \&\ c(\overline{X}) = 1 - c(X)\}$$ Then we can prove the following: if a credence function $c$ is not in $R$, then there is $c^*$ in $R$ such that each $p$ in $R$ expects $c^*$ to be better than it expects $c$ to be (for the proof strategy, see Section 3.2 here, but replace the possible chance functions with the rational credence functions). Thus, just as the extreme impermissivist answers the value question by saying that, if you're irrational, there's a credence function the unique rational credence function prefers to yours, while if you're rational, there isn't, the moderate permissivist can say that, if you're irrational, there is a credence function that all the rational credence functions prefer to yours, while if you're rational, there isn't.
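Here is a minimal sketch of why this works for the Brier score in particular (my illustration): the expected Brier utility of $c$ from the point of view of $p$ is $-\|p - c\|^2$ plus a term that depends only on $p$, so we can take $c^*$ to be the Euclidean projection of $c$ onto $R$, and every $p$ in $R$ then prefers $c^*$ to $c$.

```python
# A sketch for the Brier score (my illustration). Expected Brier utility
# of c from p is -||p - c||^2 plus a term depending only on p, so the
# Euclidean projection c* of an irrational c onto the closed convex set R
# is preferred to c by every p in R.
import numpy as np

def expected_brier(p, c):
    """Expected Brier utility of credences c = (c_X, c_notX), from the
    point of view of the probabilistic credences p."""
    utility_if_true = -((1 - c[0]) ** 2 + c[1] ** 2)
    utility_if_false = -(c[0] ** 2 + (1 - c[1]) ** 2)
    return p[0] * utility_if_true + (1 - p[0]) * utility_if_false

c = (0.9, 0.1)        # irrational here: credence in X exceeds 0.8
c_star = (0.8, 0.2)   # its Euclidean projection onto R

for x in np.linspace(0.6, 0.8, 101):
    p = (x, 1 - x)
    assert expected_brier(p, c_star) > expected_brier(p, c)
print("every p in R expects c* to do better than c")
```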

Of course, you might think that it is still a problem for moderate permissivists that there are rational credence functions that expect some irrational credence functions to be better than some alternative rational ones. But I don't think Horowitz will have this worry. After all, the same problem affects extreme permissivism, and she doesn't take issue with this -- at least, not in the paper we're considering. For any two probabilistic credence functions $p_1$ and $p_2$, there will be some non-probabilistic credence function $p'_1$ that $p_1$ will expect to be better than it expects $p_2$ to be -- $p'_1$ is just a very slight perturbation of $p_1$ that makes it incoherent; a perturbation small enough to ensure it lies closer to $p_1$ than $p_2$ does.
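For a concrete instance of this (my numbers, Brier score again):

```python
# A concrete instance (my numbers): p1 expects a slight incoherent
# perturbation of itself to be better than it expects the coherent p2 to be.

def expected_brier(p, c):
    """Expected Brier utility of c = (c_X, c_notX) from probabilistic p."""
    utility_if_true = -((1 - c[0]) ** 2 + c[1] ** 2)
    utility_if_false = -(c[0] ** 2 + (1 - c[1]) ** 2)
    return p[0] * utility_if_true + (1 - p[0]) * utility_if_false

p1 = (0.6, 0.4)
p2 = (0.7, 0.3)
p1_perturbed = (0.61, 0.4)   # incoherent: the credences sum to 1.01

print(expected_brier(p1, p1_perturbed))  # about -0.4801
print(expected_brier(p1, p2))            # -0.5
```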

A different worry about the account of the value of rationality that I have just offered on behalf of the moderate permissivist is that it seems to do no more than push the problem back a step. It says that all irrational credence functions have a flaw that all rational credence functions lack. The flaw is this: there is an alternative preferred by all rational credence functions. But to assume that this is indeed a flaw seems to presuppose that we should care how rational credence functions evaluate themselves and other credence functions. But isn't the reason for caring what they say exactly what we have been asking for? Isn't the person who posed the value question in the first place simply going to respond: OK, but what's so great about all the rational credence functions expecting something else to be better, when the question on the table is exactly why rational credence functions are so good?

This is a powerful objection, but note that it applies equally well to Horowitz's response to the value question on behalf of the impermissivist. There, she claims that what is good about being rational is that you thereby maximise expected accuracy from the point of view of the unique rational credence function. But without an account of what's so good about being rational, I think we equally lack an account of what's so good about maximizing expected accuracy from the point of view of the rational credence functions.

So, in the end, I think Horowitz's answer to the value question on behalf of the impermissivist and my proposed expected epistemic utility answer on behalf of the moderate permissivist are ultimately unsatisfying.

What's more, Horowitz's answer on behalf of the extreme permissivist is also a little unsatisfying. The answer turns on the claim that immodesty is a virtue, together with the fact that precisely those credence functions identified as rational by subjective Bayesianism have that virtue. But is it a virtue? Just as arrogance in a person might seem excusable if they genuinely are very competent, but not if they are incompetent, so immodesty in a credence function only seems virtuous if the credence function itself is good. If the credence function is bad, then evaluating itself as uniquely the best seems just another vice to add to its collection. 

So I think Horowitz's answer to the value question on behalf of the extreme permissivist is a little unsatisfactory. But it lies very close to an answer I find compelling. That answer appeals not to immodesty, but to non-dominance. Having a credence function that is dominated is bad. It leaves free epistemic utility on the table in just the same way that a dominated action in practical decision theory leaves free pragmatic utility on the table. For the extreme permissivist, what is valuable about rationality is that it ensures that you don't suffer from this flaw. 

One noteworthy feature of this answer is the conception of rationality to which it appeals. On this conception, the value of rationality does not derive fundamentally from the possession of a positive feature, but from the lack of a negative feature. Ultimately, the primary notion here is irrationality. A credence function is irrational if it exhibits certain flaws, which are spelled out in terms of its success in the pursuit of epistemic utility. You are rational if you are free of these flaws. Thus, for the extreme permissivist, there is just one such flaw -- being dominated. So the rational credences are simply those that lack that flaw -- and the maths tells us that those are precisely the probabilistic credence functions.

We can retain this conception of rationality, motivate moderate permissivism, and answer the value question for it. In fact, there are at least two ways to do this. We met something very close to the first when we tried to rehabilitate the moderate permissivist's appeal to guaranteed epistemic utility above. There, we said that what makes rationality good is that it ensures that you are immodest and also ensures a certain guaranteed level of accuracy. But, a few paragraphs back, we argued that immodesty is no virtue. So that answer can't be quite right. But we can replace the appeal to immodesty with an appeal to non-dominance, and then the answer will be more satisfying. Thus, the moderate permissivist who says that the rational credence functions are exactly those $(x, 1-x)$ with $0.4 \leq x \leq 0.6$ can say that being rational is valuable because: (i) if you're rational, you aren't dominated; (ii) if you're rational, you are guaranteed epistemic utility of at least $-0.72$; and (iii) only if you are rational will (i) and (ii) both hold. This answers the value question by appealing to how well credence functions promote epistemic utility, and it separates the rational from the irrational precisely.

To explain the second way we might do this, we invoke William James. Famously, in The Will to Believe, James said that we have two goals when we believe: to believe truth, and to avoid error. But these pull in different directions. If we pursue the first by believing something, we open ourselves up to the possibility of error. If we pursue the second by suspending judgment on something, we foreclose the possibility of believing the truth about it. Thus, to govern our epistemic life, we must balance these two goals. James held that how we do this is a subjective matter of personal judgment, and a number of different ways of weighing them are permissible. Thomas Kelly has argued that this can motivate permissivism in the case of full beliefs. Suppose the epistemic utility you assign to getting things right -- that is, believing truths and disbelieving falsehoods -- is $R > 0$. And suppose you assign epistemic utility $-W < 0$ to getting things wrong -- that is, disbelieving truths and believing falsehoods. And suppose you assign $0$ to suspending judgment. And suppose $W > R$. Then, as Kenny Easwaran and Kevin Dorst have independently pointed out, if $r$ is the evidential probability of $X$, believing $X$ maximises expected epistemic utility iff $\frac{W}{R + W} \leq r$, while suspending on $X$ maximises expected epistemic utility iff $\frac{R}{R+W} \leq r \leq \frac{W}{R+W}$. If William James is right, different values for $R$ and $W$ are permissible. The more you value believing truths, the greater will be $R$. The more you value avoiding falsehoods, the greater will be $W$ (and so the lower will be $-W$). Thus, there will be a possible evidential probability $r$ for $X$, as well as permissible values $R$, $W$ and permissible values $R'$, $W'$ for getting things right and getting things wrong, such that $$\frac{W}{R+W} < r < \frac{W'}{R'+W'}$$ So, for someone with epistemic utilities characterised by $R$, $W$, it is rational to believe $X$, while for someone with $R'$, $W'$, it is rational to suspend judgment on $X$ (note that, since $W' > R'$, we have $\frac{R'}{R'+W'} < \frac{1}{2} < r$, so suspending rather than disbelieving is optimal for them). Hence, permissivism about full beliefs.
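Here is a minimal sketch of Kelly's point with concrete numbers (the values of $R$ and $W$ are my own choices, both with $W > R$): the same evidential probability licenses belief under one permissible weighting and suspension under another.

```python
# A sketch of the Jamesian thresholds (my choice of values for R and W).
# With utility R for getting X right, -W for getting it wrong, and 0 for
# suspending, believing X is optimal iff r >= W / (R + W).

def best_attitude(r, R, W):
    """The attitude to X that maximises expected epistemic utility, given
    evidential probability r for X."""
    expected = {
        "believe": r * R - (1 - r) * W,
        "disbelieve": (1 - r) * R - r * W,
        "suspend": 0.0,
    }
    return max(expected, key=expected.get)

r = 0.75
print(best_attitude(r, R=1, W=2))  # threshold W/(R+W) = 2/3 <= 0.75: 'believe'
print(best_attitude(r, R=1, W=4))  # threshold 4/5 > 0.75: 'suspend'
```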

As Horowitz points out, however, the same trick won't work for credences. After all, as we've seen, all legitimate measures of epistemic utility for credences are strictly proper measures. And thus, if $r$ is the evidential probability of $X$, then credence $r$ in $X$ uniquely maximises expected epistemic utility relative to any one of those measures. So, a Jamesian permissivism about measures of epistemic value gives permissivism about doxastic states in the case of full belief, but not in the case of credence.
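To see the contrast numerically (my sketch, using the Brier score): whatever the evidential probability $r$ is, the credence that maximises expected epistemic utility is $r$ itself, so varying the utilities within the strictly proper family opens up no Jamesian slack.

```python
# Illustrating strict propriety with the Brier score (my sketch): from
# the point of view of evidential probability r, expected utility is
# uniquely maximised by credence r itself.
import numpy as np

def expected_brier(r, c):
    """Expected Brier utility of credence c in X (and 1 - c in not-X),
    when the evidential probability of X is r."""
    return r * (-2 * (1 - c) ** 2) + (1 - r) * (-2 * c ** 2)

r = 0.73
grid = np.linspace(0, 1, 100001)
print(grid[np.argmax(expected_brier(r, grid))])  # 0.73
```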

Nonetheless, I think we can derive permissivism about credences from James' insight. The key is to encode our attitudes towards James' two great goals for belief not in our epistemic utilities but in the rule we adopt when we use those epistemic utilities to pick our credences. Here's one suggestion, which I pursued at greater length in this paper a few years ago, and which I generalised in some blog posts over the summer -- I won't actually present the generalization here, since it's not required to make the basic point. James recognised that, by giving yourself the opportunity to be right about something, you thereby run the risk of being wrong. In the credal case, by giving yourself the opportunity to be very accurate about something, you thereby run the risk of being very inaccurate. In the full belief case, to avoid that risk completely, you must never commit on anything. It was precisely this terror of being wrong that James lamented in Clifford. By ensuring he could never be wrong, Clifford closed himself off from true beliefs he might otherwise have had. James believed that the extent to which you are prepared to take these epistemic risks is a passional matter -- that is, a matter of subjective preference. We might formalize it using a decision rule called the Hurwicz criterion. This rule was developed by Leonid Hurwicz for situations in which no probabilities are available to guide our decisions, so it is ideally suited to the situation in which we must pick our prior credences.

Maximin is the rule that says you should pay attention only to the worst-case scenario and choose a credence function that does best there -- you should maximise your minimum possible utility. Maximax is the rule that says you should pay attention only to the best-case scenario and choose a credence function that does best there -- you should maximise your maximum possible utility. The former is maximally risk-averse, the latter maximally risk-seeking. As I showed here, if you measure epistemic utility in a standard way, maximin demands that you adopt the uniform credence function -- its worst case is best. And on almost any measure of epistemic utility, maximax demands that you pick a possible world and assign maximal credence to all propositions that are true there and minimal credence to all propositions that are false there -- its best case, which obviously occurs at the world you picked, is best, because the credence function is perfect there.

The Hurwicz criterion is a continuum of decision rules with maximin at one end and maximax at the other. You pick a weighting $0 \leq \lambda \leq 1$ that measures how risk-seeking you are, and you define the Hurwicz score of an option $a$, with utility $a(w)$ at world $w$, to be $$H^\lambda(a) = \lambda \max \{a(w) : w \in W\} + (1-\lambda) \min \{a(w) : w \in W\}$$ And you pick an option with the highest Hurwicz score.

Let's see how this works out in the simplest case, namely, that in which you have credences only in $X$ and $\overline{X}$. As before, we write credence functions defined on these two propositions as $(c(X), c(\overline{X}))$. Then, if $\lambda \leq \frac{1}{2}$ -- that is, if you give at least as much weight to the worst case as to the best case -- the uniform distribution $(\frac{1}{2}, \frac{1}{2})$ maximises the Hurwicz score relative to any strictly proper measure. And if $\lambda > \frac{1}{2}$ -- that is, if you are risk-seeking and give more weight to the best case than the worst -- $(\lambda, 1 - \lambda)$ and $(1-\lambda, \lambda)$ both maximise the Hurwicz score.
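A numerical check of these claims under the Brier score (my sketch); the endpoints $\lambda = 0$ and $\lambda = 1$ recover maximin and maximax from the previous paragraphs:

```python
# Checking the Hurwicz claims for the Brier score (my sketch). The
# endpoints recover the rules above: lambda = 0 is maximin (uniform
# wins), lambda = 1 is maximax (an extremal credence function wins).
import numpy as np

def hurwicz_score(lam, x):
    """Hurwicz score of the probabilistic credences (x, 1-x), with Brier
    utility -2(1-x)^2 if X is true and -2x^2 if X is false."""
    utilities = (-2 * (1 - x) ** 2, -2 * x ** 2)
    return lam * max(utilities) + (1 - lam) * min(utilities)

grid = np.linspace(0, 1, 10001)
for lam in (0.0, 0.3, 0.5, 0.7, 1.0):
    scores = [hurwicz_score(lam, x) for x in grid]
    print(lam, grid[int(np.argmax(scores))])
# lambda <= 1/2: the unique maximiser is 0.5, the uniform credence function
# lambda > 1/2: both lambda and 1 - lambda maximise (argmax reports one)
```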

Now, if any $0 \leq \lambda \leq 1$ is permissible, then so is any credence function $(x, 1-x)$, and we get extreme permissivism. But I think we're inclined to say that there are extreme attitudes to risk that are not rationally permissible, just as there are preferences relating the scratching of one's finger and the destruction of the world that are not rationally permissible. That is, there is some range from $a$ to $b$, with $0 \leq a < b \leq 1$, such that the rational attitudes to risk are precisely those encoded by the Hurwicz weights that lie between $a$ and $b$. If that's the case, we obtain moderate permissivism.

To be a bit more precise, this gives us both moderate interpersonal and moderate intrapersonal permissivism. It gives us moderate interpersonal permissivism if $\frac{1}{2} < b < 1$ -- that is, if we are permitted to give more than half, but not all, of our weight to the best-case epistemic utility. For then there is $b'$ with $\max(a, \frac{1}{2}) < b' < b$, and $(b, 1-b)$ and $(b', 1-b')$ are distinct and both rationally permissible. But there is also $b''$ with $b < b'' < 1$, and for any such $b''$, $(b'', 1-b'')$ is not rationally permissible. It also gives us moderate intrapersonal permissivism under the same condition. For if $\frac{1}{2} < b$ and $b$ is your Hurwicz weight, then $(b, 1-b)$ and $(1-b, b)$ are distinct, and both are rationally permissible for you.

How does this motivation for moderate permissivism fare with respect to the value question? I think it fares as well as the non-dominance-based answer I sketched above for the extreme permissivist. There, I appealed to a single flaw that a credence function might have: it might be dominated by another. Here, I introduced another flaw. It might be rationalised only by Jamesian attitudes to epistemic risk that are too extreme or otherwise beyond the pale. Like being dominated, this is a flaw that relates to the pursuit of epistemic utility. If you exhibit it, you are irrational. And to be rational is to be free of such flaws. The moderate permissivist can thereby answer the value question that Horowitz poses.
