Self-recommending decision theories
A PDF of this blogpost is available here.
Once again, I find myself stumbling upon a philosophical thought that seems so natural that I feel reasonably confident it must have been explored before, but I can't find where. So, in this blogpost, I'll set it out in the hope that a kind reader will know where to find a proper version already written up fully.*
I'd like to develop a type of objection that might be raised against a theory of rational decision-making. Here, I'll raise it against Lara Buchak's risk-weighted expected utility theory, in particular, but there will be many other theories to which it applies.
In brief, the objection applies to decision theories that are not self-recommending. That is, it applies to a decision theory if there is a particular instance of that theory that recommends that you use some alternative decision theory to make your decision; if you were to use this decision theory to choose which decision theory to use to make your choices, it would tell you to choose a different one, and not itself. We might naturally say that a decision theory that is not self-recommending in this sense is not a coherent means by which to make decisions, and that seems to be a strong strike against it.
A self-recommending Timothy Dalton
On what basis might we criticize a theory of rational decision making? One popular way is to show that any individual who adopts the theory is exploitable; that is, there are decisions they might face in response to which the theory will lead them to choose certain options when there are alternative options that are guaranteed to be better; that is, in the jargon of decision theory, there are alternatives that dominate the recommendations of the decision theory in question. This is the sort of objection that money pump arguments raise against their targets. For instance, suppose my decision theory permits cyclical preferences, so that I may prefer $a$ to $b$, $b$ to $c$, and $c$ to $a$. Then, if I have those preferences and choose in line with them, the money pump argument notes that I will choose $b$ over $c$, when presented with a choice between them, then pay money to swap $b$ for $a$, when presented with that option, and then pay money again to swap $a$ for $c$, when presented with that possibility. However, I might simply have chosen $c$ in the first place and then refrained from swapping thereafter, and I would have ended up better off for sure. The second sequence of choices dominates the first. So, the exploitability argument concludes, cyclical preferences are irrational and so any decision theory that permits them is flawed.
This sort of objection is also often raised against decision theories that permit sensitivity to risk. For instance, take the most extreme risk-averse decision theory available, namely Abraham Wald's Maximin. This doesn't just permit sensitivity to risk---it demands it. It says that, in any decision problem, you should choose the option whose worst-case outcome is best. So, suppose you are faced with the choice between $a$ and $b$, both of which are defined at two possible worlds, $w_1$ and $w_2$:
$$\begin{array}{r|cc}& w_1 & w_2 \\ \hline a & 3 & 0 \\ b & 1 & 1 \end{array}$$
Then Maximin says that you should choose $b$, since its worst-case outcome gives 1 utile, while the worst-case outcome of $a$ gives 0 utiles.
Now, after facing that first decision, and choosing $b$ in line with Maximin, suppose you're now faced with the choice between $a'$ and $b'$:
$$\begin{array}{r|cc} & w_1 & w_2 \\ \hline a' & 0 & 3 \\ b' & 1 & 1 \end{array}$$
You choose $b'$, just as Maximin requires. But it's easy to see that $a$ and $a'$, taken together, dominate $b$ and $b'$. $a + a'$ gives 3 utiles for sure, while $b + b'$ gives 2 utiles for sure.
$$\begin{array}{r|cc} & w_1 & w_2 \\ \hline a+a' & 3 & 3 \\ b+b' & 2 & 2 \end{array} $$
So Maximin is exploitable.
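For concreteness, here is a minimal Python sketch of the example just given; it is my own illustration, not part of the original argument, and the function name `maximin_choice` is just a label I've chosen. It computes the Maximin choices in the two decision problems above and confirms that the alternative pair of choices dominates them.

```python
# A small check of the Maximin example above; options are tuples of utilities (at w1, w2).
def maximin_choice(options):
    """Return the option whose worst-case utility is highest."""
    return max(options, key=lambda name: min(options[name]))

d = {"a": (3, 0), "b": (1, 1)}
d_prime = {"a'": (0, 3), "b'": (1, 1)}

print(maximin_choice(d), maximin_choice(d_prime))  # b b'

# The pair of Maximin choices is dominated by choosing a and a' instead:
chosen = tuple(x + y for x, y in zip(d["b"], d_prime["b'"]))       # (2, 2)
alternative = tuple(x + y for x, y in zip(d["a"], d_prime["a'"]))  # (3, 3)
print(chosen, alternative)
```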
I've argued in various places that I don't find exploitability arguments compelling.** They show only that the decision rule will lead to a bad outcome when the decision maker is faced with quite a specific series of decision problems. But that tells me little about the performance of the decision rule over the vast array of possible decisions I might face. Perhaps an exploitable rule compensates for its poor performance in those particular cases by performing extremely well in other cases. For all the exploitability objection tells me, that could well be the case.
Recognising this problem, you might instead ask: how does this decision rule perform on average over all decision problems you might face? And indeed it's easy to show that decision theories that disagree with expected utility theory will perform worse on average than expected utility theory itself. But that's partly because we've stacked the deck in favour of expected utility theory. After all, looking at the average performance over all decision problems is just looking at the expected performance from the point of view of a credence function that assigns equal probability to all possible decision problems. And, as we'll show explicitly below, expected utility theory judges itself to be the best decision theory to use; that is, it does best in expectation; that is, it does best on average.
But while this argument begs the question against non-expected utility theories, it does suggest a different way to test a decision theory: ask not whether it does best on average, and thus by the lights of expected utility theory; ask rather whether it does best by its own lights; ask whether it judges itself to be the best decision theory. Of course, this is a coherence test, and like all coherence tests, passing it is not sufficient for rationality. But it does seem that failing is sufficient for irrationality. It is surely irrational to use a method for selecting the best means to your ends that does not think it is the best method for selecting the best means to your ends.
Let's begin by seeing a couple of theories that pass the test. Expected utility theory is the obvious example, and running through that will allow us to set up the formal framework. We begin with the space of possible states. There are two components to these states:
- Let $W$ be the set of possible worlds grained finely enough to determine the utilities of all the options between which you will pick;
- Let $D$ be the set of decision problems you might face.
Then a state is a pair $(d, w)$ consisting of a decision problem $d$ from $D$ and a possible world $w$ from $W$. And now we define your credence function $p : D \times W \rightarrow [0, 1]$. So $p(d\ \&\ w)$ is your credence that you will face decision problem $d$ and are at world $w$.
Then expected utility theory says that, faced with a decision problem $d$ from $D$, you should pick an option $a$ from among those in $d$ that have maximal expected utility from the point of view of $p$. Given option $a$ and world $w$, we write $a(w)$ for the utility of $a$ at $w$. So the expected utility of $a$ is
$$
\sum_{w \in W} p(w | d)a(w)
$$
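In code, this is just a weighted sum. Here is a minimal Python sketch of the definition (mine, not the post's), with worlds as strings and options as dictionaries from worlds to utilities, applied for illustration to the two options from the Maximin example above:

```python
# Expected utility of an option, given credences conditional on the decision problem faced.
def expected_utility(option, p_given_d):
    """Sum over worlds of p(w | d) * a(w)."""
    return sum(p_given_d[w] * option[w] for w in p_given_d)

p_given_d = {"w1": 0.5, "w2": 0.5}
a = {"w1": 3, "w2": 0}
b = {"w1": 1, "w2": 1}
print(expected_utility(a, p_given_d), expected_utility(b, p_given_d))  # 1.5 1.0
```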
Now, let's ask how expected utility theory judges itself. Given a decision theory $R$ and a decision problem $d$, let $R(d)$ be the option in $d$ that $R$ requires you to take; so $R(d)(w)$ is the utility at world $w$ of the option from decision problem $d$ that decision theory $R$ requires you to take. Note that, in order for this to be well-defined for theories like $EU$ in which there might be multiple options with maximal expected utility, we must supplement those theories with a mechanism for breaking ties; but that is easily done. So $EU(d)$ is one of the acts in $d$ that maximises expected utility. Then expected utility theory assigns the following value or choiceworthiness to a decision theory $R$:
$$
EU(R) = \sum_{d \in D} \sum_{w \in W} p(d\ \&\ w)R(d)(w)
$$
Now, suppose $R$ is a decision theory and $d$ is a decision problem. Then
$$
\sum_{w \in W} p(w | d)EU(d)(w) \geq \sum_{w \in W} p(w | d)R(d)(w)
$$
with strict inequality if $R$ picks an option that doesn't maximise expected utility and $p$ assigns positive credence to a world $w$ at which the utility of this option differs from the utility of the one that does. But then
$$
\sum_{d \in D} p(d) \sum_{w \in W} p(w | d)EU(d)(w) \geq \sum_{d \in D} p(d) \sum_{w \in W} p(w | d)R(d)(w)
$$
So
$$
EU(EU) = \sum_{d \in D} \sum_{w \in W} p(d\ \&\ w)EU(d)(w) \geq \sum_{d \in D}\sum_{w \in W} p(d\ \&\ w)R(d)(w) = EU(R)
$$
with strict inequality if $p$ assigns positive credence to some $d\ \&\ w$ where $R$ chooses an option that doesn't maximise expected utility and where, at $w$, the utility of that option differs from the utility of the option that does. So, expected utility theory recommends itself.
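Here is a small numerical spot-check of that argument, again a sketch of my own rather than anything from the post: on a handful of randomly generated two-option problems, with a uniform credence over problem-world pairs, it enumerates every deterministic rule and confirms that none is judged better than expected utility theory by expected utility theory's own lights.

```python
# Spot-check that EU(EU) >= EU(R) for every deterministic rule R on some random problems.
import itertools
import random

random.seed(0)
worlds = ["w1", "w2"]
problems = {
    f"d{i}": {"a": {w: random.randint(0, 10) for w in worlds},
              "b": {w: random.randint(0, 10) for w in worlds}}
    for i in range(4)
}
# Uniform joint credence over (decision problem, world) pairs.
p = {(d, w): 1 / (len(problems) * len(worlds)) for d in problems for w in worlds}
p_d = {d: sum(p[(d, w)] for w in worlds) for d in problems}

def eu_rule(d):
    """The option expected utility theory picks in problem d (ties broken arbitrarily)."""
    opts = problems[d]
    return max(opts, key=lambda o: sum(p[(d, w)] / p_d[d] * opts[o][w] for w in worlds))

def value_of_rule(rule):
    """EU(R): the expected utility of following rule R across all problems."""
    return sum(p[(d, w)] * problems[d][rule(d)][w] for d in problems for w in worlds)

eu_value = value_of_rule(eu_rule)
for choices in itertools.product(["a", "b"], repeat=len(problems)):
    rival = dict(zip(problems, choices))
    assert eu_value >= value_of_rule(lambda d, r=rival: r[d]) - 1e-12
print("EU(EU) =", eu_value, "and no rival rule is judged better.")
```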
Maximin, which we met above, is another self-recommending decision theory. What is the value or choiceworthiness that Maximin assigns to a decision theory $R$? It is the lowest utility you can obtain from using $R$ to make a decision:
$$
M(R) = \min_{\substack{d \in D \\ w \in W}} \{R(d)(w)\}
$$
Now, suppose $M$ judges $R$ to be better than it judges itself to be. That is, $$M(R) > M(M)$$ Then pick a decision problem $d^\star$ and a world $w^\star$ at which $M$ obtains its minimum. That is, $M(d^\star)(w^\star) = M(M)$. And suppose $R$ picks option $a$ from $d^\star$. Then, for each world $w$, $a(w) > M(d^\star)(w^\star)$; if this were not the case, then $R$ would achieve as low a minimum as $M$. But then $M$ would have recommended $a$ instead of $M(d^\star)$ when faced with $d^\star$, since the worst-case outcome of $a$ is better than the worst-case outcome of $M(d^\star)$, which occurs at $w^\star$; and that contradicts the fact that $M(d^\star)$ is what Maximin recommends in $d^\star$. So no such $R$ exists, and Maximin is a self-recommending decision theory.
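The same kind of spot-check works here, again a sketch of my own with the same toy setup as before: Maximin's value for a rule is the worst utility that following the rule can leave you with, and no rival rule comes out better by that measure.

```python
# Spot-check that M(M) >= M(R) for every deterministic rule R on some random problems.
import itertools
import random

random.seed(1)
worlds = ["w1", "w2"]
problems = {
    f"d{i}": {"a": {w: random.randint(0, 10) for w in worlds},
              "b": {w: random.randint(0, 10) for w in worlds}}
    for i in range(4)
}

def maximin_rule(d):
    """The option Maximin picks in problem d."""
    opts = problems[d]
    return max(opts, key=lambda o: min(opts[o].values()))

def maximin_value(rule):
    """M(R): the lowest utility obtainable by following rule R in any (d, w)."""
    return min(problems[d][rule(d)][w] for d in problems for w in worlds)

m_value = maximin_value(maximin_rule)
for choices in itertools.product(["a", "b"], repeat=len(problems)):
    rival = dict(zip(problems, choices))
    assert m_value >= maximin_value(lambda d, r=rival: r[d])
print("M(M) =", m_value, "and no rival rule is judged better.")
```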
Now, Maximin is usually rejected as a reasonable decision rule for other reasons. For one thing, without further supplementary principles, it permits choices that are weakly dominated---that is, in some decision problems, it will declare one option permissible when there is another that is at least as good at all worlds and better at some. And since it pays no attention to the probabilities of the outcomes, it also permits choices that are stochastically dominated---that is, in some decision problems, it will declare one option permissible when there is another with the same possible outcomes, but higher probabilities for the better of those outcomes and lower probabilities for the worse. For another thing, Maximin just seems too extreme. It demands that you take £1 for sure instead of a 1% chance of 99p and a 99% chance of £10 trillion.
An alternative theory of rational decision making that attempts to accommodate less extreme attitudes to risk is Lara Buchak's risk-weighted expected utility theory. This theory encodes your attitudes to risk in a function $r : [0, 1] \rightarrow [0, 1]$ that is (i) continuous, (ii) strictly increasing, and (iii) assigns $r(0) = 0$ and $r(1) = 1$. This function is used to skew probabilities. For risk-averse agents, the probabilities are skewed in such a way that worst-case outcomes receive more weight than expected utility theory gives them, and best-case outcomes receive less weight. For risk-inclined agents, it is the other way around. For risk-neutral agents, $r(x) = x$, the probabilities aren't skewed at all, and the theory agrees exactly with expected utility theory.
Now, suppose $W = \{w_1, \ldots, w_n\}$. Given a credence function $p$ and a risk function $r$ and an option $a$, if $a(w_1) \leq a(w_2) \leq \ldots \leq a(w_n)$, the risk-weighted expected utility of $a$ is
\begin{eqnarray*}
& & REU_{p, r}(a) \\
& = & a(w_1) + \sum^{n-1}_{i=1} r(p(w_{i+1}) + \ldots + p(w_n))(a(w_{i+1}) - a(w_i)) \\
& = & \sum^{n-1}_{i=1} [r(p(w_i) + \ldots + p(w_n)) - r(p(w_{i+1}) + \ldots + p(w_n))]a(w_i) + r(p(w_n))a(w_n)
\end{eqnarray*}
So the risk-weighted expected utility of $a$, like the expected utility of $a$, is a weighted sum of the various utilities that $a$ can take at the different worlds. But, whereas expected utility theory weights the utility of $a$ at $w_i$ by the probability of $w_i$, risk-weighted utility theory weights it by the difference between the skewed probability that you will receive at least $a(w_i)$ from choosing $a$ and the skewed probability that you will receive more than $a(w_i)$ from choosing $a$.
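Here is a minimal Python sketch of that formula for a finite gamble; it is my own reconstruction, and the function name `reu` and its arguments are not from the post. As a quick sanity check, with the risk-neutral function $r(x) = x$ it returns the ordinary expectation.

```python
# Risk-weighted expected utility of a finite gamble, following the formula above.
def reu(outcomes, probs, r):
    """outcomes: utilities; probs: their probabilities; r: the risk function."""
    pairs = sorted(zip(outcomes, probs))                # order outcomes from worst to best
    total = pairs[0][0]                                 # start from the worst-case utility
    for i in range(len(pairs) - 1):
        tail_prob = sum(q for _, q in pairs[i + 1:])    # prob. of doing better than the i-th worst
        total += r(tail_prob) * (pairs[i + 1][0] - pairs[i][0])
    return total

print(reu([0, 10], [0.5, 0.5], lambda x: x))        # 5.0: agrees with expected utility
print(reu([0, 10], [0.5, 0.5], lambda x: x ** 2))   # 2.5: a risk-averse agent values it less
```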
Here is an example to illustrate. The decision is between $a$ and $b$:
$$\begin{array}{r|cc} & w_1 & w_2 \\ \hline a & 1 & 4 \\ b & 2 & 2 \end{array}$$
And suppose $p(w_1) = p(w_2) = 0.5$ and $r(x) = x^2$, for all $0 \leq x \leq 1$. Then:
$$
REU_{p, r}(a) = 1 + r(p(w_2))(4-1) = 1 + \left(\frac{1}{2}\right)^2 \cdot 3 = \frac{7}{4}
$$
while
$$
REU_{p, r}(b) = 2 + r(p(w_2))(2-2) = 2 = \frac{8}{4}
$$
So, while the expected utility of $a$ (i.e. 2.5) exceeds the expected utility of $b$ (i.e. 2), and so expected utility theory demands you pick $a$ over $b$, the risk-weighted expected utility of $b$ (i.e. 2) exceeds the risk-weighted expected utility of $a$ (i.e. 1.75), and so risk-weighted expected utility demands you pick $b$ over $a$.
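These two values are easy to reproduce, either with the `reu` sketch above or directly from the formula, as in this small self-contained snippet:

```python
r = lambda x: x ** 2               # the risk function r(x) = x^2 from the example
reu_a = 1 + r(0.5) * (4 - 1)       # worst outcome of a, plus r(p(w2)) times the gap to the best
reu_b = 2 + r(0.5) * (2 - 2)       # b is constant, so its REU is just 2
print(reu_a, reu_b)                # 1.75 2.0
```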
Now we're ready to ask the central question of this post: does risk-weighted utility theory recommend itself? And we're ready to give our answer, which is that it doesn't.
It's tempting to think it does, and for the same reason that expected utility theory does. After all, if you're certain that you'll face a particular decision problem, risk-weighted expected utility theory recommends using itself to make that decision. How could it not? It recommends picking a particular option, and therefore recommends any theory that will pick that option, since using that theory has the same utility as picking the option at every world. So, you might expect, it will also recommend itself when you're uncertain which decision problem you'll face. But risk-weighted expected utility theory doesn't work like that.
Let me begin by noting the simplest case in which it recommends something else. This is the case in which there are two decision problems, $d$ and $d'$, and you're certain that you'll face one or the other, but you're unsure which.
$$
\begin{array}{r|cc}
d & w_1 & w_2 \\
\hline
a & 3 & 6 \\
b & 2 & 8
\end{array}\ \ \
\begin{array}{r|cc}
d' & w_1 & w_2 \\
\hline
a' & 4 & 19 \\
b' & 7 & 9
\end{array}
$$
You think each is equally likely, you think each world is equally likely, and you think the worlds and decision problems are independent. So, $$p(d\ \&\ w_1) = p(d\ \&\ w_2)=p(d'\ \&\ w_1) = p(d'\ \&\ w_2) = \frac{1}{4}$$ Then:
$$
REU(a) = 3 + \left(\frac{1}{2}\right)^2(6-3) = 3.75 > 3.5 = 2 + \left(\frac{1}{2}\right)^2(8-2) = REU(b)
$$
$$
REU(a') = 4 + \left(\frac{1}{2}\right)^2(19-4) = 7.75 > 7.5 = 7 + \left(\frac{1}{2}\right)^2(9-7) = REU(b')
$$
So REU will tell you to choose $a$ when faced with $d$ and $a'$ when faced with $d'$. Now compare that with a decision rule $R$ that tells you to pick $b$ and $b'$ respectively.
$$
REU(REU) = 3 + \left(\frac{3}{4}\right)^2(4-3) + \left(\frac{1}{2}\right)^2(6-4) + \left(\frac{1}{4}\right)^2(19-6) = 4.875
$$
and
$$
REU(R) = 2 + \left(\frac{3}{4}\right)^2(7-2) + \left(\frac{1}{2}\right)^2(8-7) + \left(\frac{1}{4}\right)^2(9-8) = 5.125
$$
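A quick way to double-check these two numbers (my own sketch, repeating the `reu` helper so the snippet is self-contained) is to treat each rule as a single gamble over the four equally likely problem-world pairs:

```python
# Verify REU(REU) = 4.875 and REU(R) = 5.125 for the pair of decision problems above.
def reu(outcomes, probs, r):
    pairs = sorted(zip(outcomes, probs))
    total = pairs[0][0]
    for i in range(len(pairs) - 1):
        total += r(sum(q for _, q in pairs[i + 1:])) * (pairs[i + 1][0] - pairs[i][0])
    return total

r = lambda x: x ** 2
quarters = [0.25] * 4                       # the four (problem, world) pairs are equally likely
print(reu([3, 6, 4, 19], quarters, r))      # 4.875: follow REU, i.e. pick a in d and a' in d'
print(reu([2, 8, 7, 9], quarters, r))       # 5.125: follow R, i.e. pick b in d and b' in d'
```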
So risk-weighted expected utility theory does not recommend itself in this situation. Yet it doesn't seem fair to criticize it on this basis alone. After all, perhaps it redeems itself by its performance in the face of other decision problems. In exploitability arguments, we consider only one series of decisions; here, we consider only a pair of possible decisions. What happens when we have much, much more limited information about the decision problems we'll face?
Let's suppose that there is a finite set of utilities, $\{0, \ldots, n\}$.*** And suppose you consider every two-world, two-option decision problem with utilities from that set to be possible and equally likely. That is, the following decision problems are all equally likely: decision problems $d$ in which there are exactly two available options, $a$ and $b$, which are defined only on mutually exclusive and exhaustive worlds $w_1$ and $w_2$, and in which the utilities $a(w_1), a(w_2), b(w_1), b(w_2)$ lie in $\{0, \ldots, n\}$.
Here are some results. Set $n = 22$, and let $p(w_1) = p(w_2) = 0.5$. And consider risk functions of the form $r_k(x) = x^k$. For $k > 1$, $r_k$ is risk-averse; for $k = 1$, $r_k$ is risk-neutral and risk-weighted expected utility theory agrees with expected utility theory; and for $k < 1$, $r_k$ is risk-seeking. We say that $REU_{p, r_k}$ judges $REU_{p, r_{k'}}$ better than it judges itself if $$REU_{p, r_k}(REU_{p, r_k}) < REU_{p, r_k}(REU_{p, r_{k'}})$$ and, in that case, we write $REU_{p, r_k} \rightarrow REU_{p, r_{k'}}$. Then we have the following results:
$$
REU_{p, r_2} \rightarrow REU_{p, r_{1.5}} \rightarrow REU_{p, r_{1.4}} \rightarrow REU_{p, r_{1.3}} \rightarrow REU_{p, r_{1.2}} \rightarrow REU_{p, r_{1.1}}
$$
and
$$
REU_{p, r_{0.5}} \rightarrow REU_{p, r_{0.6}} \rightarrow REU_{p, r_{0.7}} \rightarrow REU_{p, r_{0.8}} \rightarrow REU_{p, r_{0.9}}
$$
So, for many natural risk-averse and risk-seeking risk functions, risk-weighted utility theory isn't self-recommending. And this, it seems to me, is a problem for these versions of the theory.
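For what it's worth, here is a sketch of the kind of brute-force computation that lies behind results like these. It is my reconstruction, not the author's code, and all the helper names are mine. It enumerates every two-world, two-option problem with utilities in $\{0, \ldots, 22\}$, works out what an $REU_{p, r_k}$ agent would choose in each, and then evaluates that rule, as a single grand gamble, by the lights of $REU_{p, r_2}$. If this reconstruction matches the calculation reported above, the second printed value should exceed the first.

```python
# Compare how REU with risk function x^2 judges itself against the rule that uses x^1.5,
# over all two-world, two-option decision problems with utilities in {0, ..., N}.
from itertools import product

N = 22                       # utilities are 0, ..., N, as in the post
P_W2 = 0.5                   # p(w1) = p(w2) = 1/2

def reu_two(u1, u2, r):
    """REU of an option paying u1 at w1 and u2 at w2, with equal world probabilities."""
    lo, hi = sorted((u1, u2))
    return lo + r(P_W2) * (hi - lo)

def choice(problem, r):
    """The option an REU agent with risk function r picks (ties broken by listing order)."""
    return max(problem, key=lambda opt: reu_two(opt[0], opt[1], r))

def value_of(r_judge, r_agent):
    """REU_{p, r_judge} of the rule 'choose by REU_{p, r_agent}', over all problems."""
    problems = [((a1, a2), (b1, b2))
                for a1, a2, b1, b2 in product(range(N + 1), repeat=4)]
    p_state = 0.5 / len(problems)           # uniform over problems, worlds equally likely
    outcomes = []
    for problem in problems:
        outcomes.extend(choice(problem, r_agent))
    # REU of the grand gamble over all (problem, world) pairs, each with probability p_state.
    outcomes.sort()
    total, tail = outcomes[0], 1.0 - p_state
    for i in range(len(outcomes) - 1):
        total += r_judge(tail) * (outcomes[i + 1] - outcomes[i])
        tail -= p_state
    return total

r_2 = lambda x: x ** 2
r_15 = lambda x: x ** 1.5
print(value_of(r_2, r_2), value_of(r_2, r_15))   # self-assessment vs. assessment of the x^1.5 rule
```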
Now, for all my current results say, it's possible that there is a risk function other than $r(x) = x$ for which the theory recommends itself. But my conjecture is that this doesn't happen. The present results suggest that, for each risk-averse function, there is a less risk-averse one that it judges better, and for risk-seeking ones, there is a less risk-seeking one that it judges better. But even if there were a risk function for which the theory is self-recommending, that would surely limit the versions of risk-weighted expected utility theory that are tenable. That in itself would be an interesting result.
* The work of which I'm aware that comes closest to what interests me here is Catrin Campbell-Moore and Bernhard Salow's exploration of proper scoring rules for risk-sensitive agents in Buchak's theory. But it's not quite the same issue. And the idea of judging some part or whole of our decision-making apparatus by looking at its performance over all decision problems we might face I draw from Mark Schervish's and Ben Levinstein's work. But again, they are interested in using decision theories to judge credences, not using decision theories to judge themselves.
** In Section 13.7 of my Choosing for Changing Selves and Chapter 6 of my Dutch Book Arguments.
*** I make this restriction because it's the one for which I have some calculations; there's no deeper motivation. From fiddling with the calculations, it looks to me as if this restriction is inessential.