More on self-recommending decision theories
A PDF of this blogpost can be found here.
Last week, I wrote about how we might judge a decision theory by its own lights. I suggested that we might ask the decision theory whether it would choose to adopt itself as a decision procedure if it were uncertain about which decisions it would face. And I noted that many instances of Lara Buchak's risk-weighted expected utility theory (REU) do not recommend themselves when asked this question. In this post, I want to give a little more detail about that case, and also note a second decision theory that doesn't recommend itself, namely, $\Gamma$-Maximin (MM), a decision theory designed to be used when uncertainty is modeled by imprecise probabilities.
[Image: A cat judging you...harshly]
The framework is this. For the sake of the calculations I'll present here, we assume that you'll face a decision problem with the following features:
- there will be two available options, $a$ and $b$;
- each option is defined for two exhaustive and exclusive possibilities, which we'll call worlds, $w_1$ and $w_2$; so, each decision problem is determined by a quadruple $(a_1, a_2, b_1, b_2)$, where $a_i$ is the utility of option $a$ at $w_i$, and $b_i$ is the utility of $b$ at $w_i$;
- all of the utilities will be drawn from the set $\{0, 1, 2, \ldots, 20\}$; so, there are $21^4 = 194,481$ possible decision problems.
This is all you know about the decision problem you'll face, so you place a uniform distribution over the possible decision problems, giving each probability $\frac{1}{194,481}$.
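To fix ideas, here is a minimal Python sketch of that space of decision problems (the names are mine, nothing official):

```python
from itertools import product

UTILITIES = range(21)  # utilities drawn from {0, 1, ..., 20}

# A decision problem is a quadruple (a1, a2, b1, b2): the utilities of
# option a and option b at worlds w1 and w2.
PROBLEMS = list(product(UTILITIES, repeat=4))

assert len(PROBLEMS) == 21 ** 4  # 194,481 equiprobable decision problems
PROBABILITY_OF_EACH = 1 / len(PROBLEMS)
```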
You also assign probabilities to $w_1$ and $w_2$.
- In the case of REU, you have a credence function $(p, 1-p)$ over $w_1$ and $w_2$, so that $p$ is your credence in $w_1$ and $1-p$ is your credence in $w_2$.
- In the case of MM, you represent your uncertainty by a set of such credence functions $\{(x, 1-x) : p \leq x \leq q\}$.
In the case of REU, you also have a risk function $r$. In this post, I'll only consider risk functions of the form $r(x) = x^k$. I'll write $r_k$ for that function.
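For reference, and assuming Buchak's standard rank-dependent definition (which I won't restate in full here), the risk-weighted expected utility of a two-outcome option $a = (a_1, a_2)$, relative to the credence function $(p, 1-p)$ and risk function $r$, is

$$\mathrm{REU}_{p,r}(a) = \begin{cases} a_1 + r(1-p)(a_2 - a_1) & \text{if } a_1 \leq a_2, \\ a_2 + r(p)(a_1 - a_2) & \text{otherwise;} \end{cases}$$

that is, the worst-case utility plus the gap up to the best case, weighted by $r$ applied to the probability of getting the better outcome.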
With all of that in place, we can ask our question about REU. Fix your credence function $(p, 1-p)$. Now, given a particular risk function $r$, does REU-with-$r$ judge itself to be the best decision theory available? Or is there an alternative decision theory---possibly REU-with-a-different-risk-function, but not necessarily---such that the risk-weighted expected utility, from the point of view of $r$, of REU-with-$r$ is less than the risk-weighted expected utility, from the point of view of $r$, of this alternative? And the answer is that, for many natural choices of $r$, there is.
Let's see this in action. Let $p = \frac{1}{2} = 1-p$, and consider the risk functions $r_k(x) = x^k$ for $0.5 \leq k \leq 2$. Then the following table gives some results. The entry at (row $k$, column $k'$) gives the risk-weighted expected utility, from the point of view of $r_{k'}$, of using REU-with-$r_k$ to make your decisions. In each column $k'$, the entry in green is what the risk function $r_{k'}$ thinks of itself; the entries in blue indicate the risk functions $r_k$ that $r_{k'}$ judges to be better than itself; and the entry in red is the risk function $r_k$ that $r_{k'}$ judges to be best.
[Table: How risk-weighted expected utility theories judge each other]
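In case you want to check figures like these yourself, here's a rough Python sketch of how an entry can be computed, assuming Buchak's rank-dependent formula for many-outcome prospects; the function names, and the tie-breaking rule when the two options get equal risk-weighted expected utility, are incidental choices of mine.

```python
from itertools import product

PROBLEMS = list(product(range(21), repeat=4))  # all 194,481 quadruples (a1, a2, b1, b2)

def reu(prospect, r):
    """Risk-weighted expected utility of a prospect, given as a dict mapping
    utility values to probabilities, using the rank-dependent formula."""
    utils = sorted(prospect)
    total, tail = utils[0], 1.0
    for prev, u in zip(utils, utils[1:]):
        tail -= prospect[prev]            # probability of doing strictly better than prev
        total += r(tail) * (u - prev)
    return total

def reu_option(u1, u2, p, r):
    """REU of a two-world option (u1, u2) with credence p in w1."""
    prospect = {}
    prospect[u1] = prospect.get(u1, 0) + p
    prospect[u2] = prospect.get(u2, 0) + 1 - p
    return reu(prospect, r)

def judge(k, k_prime, p=0.5):
    """REU, from the point of view of r_{k'}, of using REU-with-r_k to choose
    in a uniformly random decision problem (ties broken in favour of a)."""
    def r_k(x): return x ** k
    def r_kp(x): return x ** k_prime
    prospect = {}                         # utility value -> probability of receiving it
    for a1, a2, b1, b2 in PROBLEMS:
        if reu_option(a1, a2, p, r_k) >= reu_option(b1, b2, p, r_k):
            c1, c2 = a1, a2               # REU-with-r_k picks option a
        else:
            c1, c2 = b1, b2               # REU-with-r_k picks option b
        prospect[c1] = prospect.get(c1, 0) + p / len(PROBLEMS)
        prospect[c2] = prospect.get(c2, 0) + (1 - p) / len(PROBLEMS)
    return reu(prospect, r_kp)

# e.g. does r_0.5 judge r_0.9 to be better than itself?
# print(judge(0.5, 0.5), judge(0.9, 0.5))
```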
There are a few trends to pick out:
- Risk-inclined risk functions ($r_{k'}(x) = x^{k'}$ for $0.5 \leq k' < 1$) judge less risk-inclined ones to be better than themselves;
- The more risk-inclined, the further away the risk functions can be and still count as better, so $r_{0.5}$ judges $r_{0.9}$ to be better than itself, but $r_{0.7}$ doesn't judge $r_1$ to be better;
- And similarly, mutatis mutandis, for risk-averse risk functions ($r_{k'}(x) = x^{k'}$ for $1 < k' \leq 2$): each judges less risk-averse risk functions to be better than itself;
- And the more risk-averse, the further away a risk function can be and still be judged better.
- It might look like $r_{0.9}$ and $r_{1.1}$ are self-recommending, but that's just because we haven't considered more fine-grained possibilities between them and $r_1$. When we do, we find they follow the pattern above.
- The risk-neutral risk function $r_1$ is genuinely self-recommending. REU with this risk function is just expected utility theory.
So much for REU. Let's turn now to MM. First, let me describe this decision rule. Suppose you face a decision problem between $a = (a_1, a_2)$ and $b = (b_1, b_2)$. First, you decide how much you value each option: you take this to be the minimum expected utility it receives from the credence functions in the set that represents your uncertainty. That is, for each $(x, 1-x)$ in your set of credence functions, with $p \leq x \leq q$, you calculate the expected utility of $a$ relative to that credence function, and you value $a$ by the minimum expectation you come across. Then you pick whichever of the two options you value most: that is, the one whose minimum expected utility is greatest.
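Here's a minimal sketch of that rule for the two-world case (Python again, names and tie-breaking mine); since expected utility is linear in $x$, the minimum over the representor $\{(x, 1-x) : p \leq x \leq q\}$ is attained at one of the endpoints.

```python
def min_expected_utility(option, p, q):
    """Minimum expected utility of a two-world option (u1, u2) over the
    representor {(x, 1-x) : p <= x <= q}; by linearity in x, the minimum
    is attained at x = p or x = q."""
    u1, u2 = option
    return min(x * u1 + (1 - x) * u2 for x in (p, q))

def gamma_maximin_choice(a, b, p, q):
    """Gamma-Maximin: pick the option whose minimum expected utility is
    greatest (ties broken in favour of a -- my own tie-breaking rule)."""
    return a if min_expected_utility(a, p, q) >= min_expected_utility(b, p, q) else b
```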
Let's turn to asking how the theory judges itself. Here, we don't have different versions of the theory specified by different risk functions. But let me consider different sets of credence functions that might represent our uncertainty. I'll ask how the theory judges itself, and also how it judges the version of expected utility theory (EU) where you use the precise credence function that sits at the midpoint of the set that represents your uncertainty. So, for instance, if $p = 0.3$ and $q = 0.4$ and the representor is $\{(x, 1-x) : 0.3 \leq x \leq 0.4\}$, I'll be comparing how MM thinks of itself and how it thinks of the version of expected utility theory that uses the precise credence function $(0.35, 0.65)$. In the first column below, we have the values of $p$ and $q$; in the second, the minimum expected utility for MM relative to each pair of values; in the final column, the minimum expected utility for EU relative to those pairs.
$$\begin{array}{c|cc} (p, q) & \text{MM} & \text{EU} \\ \hline (0.3, 0.4) & 12.486 & 12.490 \\ (0.3, 0.6) & 12.347 & 12.377 \\ (0.3, 0.9) & 12.690 & 12.817 \\ (0.1, 0.2) & 12.870 & 12.873 \end{array}$$
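For what it's worth, here's a rough sketch of how numbers like these can be computed, re-using `gamma_maximin_choice` and `min_expected_utility` from the sketch above (so, again, the names and the tie-breaking are mine): work out which option each rule picks in every decision problem, and then value the prospect of following that rule by its minimum expected utility over the representor.

```python
from itertools import product

def value_of_rule(choose, p, q):
    """Minimum expected utility, over the representor {(x, 1-x) : p <= x <= q},
    of the prospect of using `choose` to pick an option in a uniformly random
    decision problem."""
    problems = list(product(range(21), repeat=4))
    avg_w1 = avg_w2 = 0.0                     # average chosen utility at w1 and at w2
    for a1, a2, b1, b2 in problems:
        c1, c2 = choose((a1, a2), (b1, b2), p, q)
        avg_w1 += c1 / len(problems)
        avg_w2 += c2 / len(problems)
    # expected utility is linear in x, so the minimum sits at an endpoint
    return min(x * avg_w1 + (1 - x) * avg_w2 for x in (p, q))

def eu_midpoint_choice(a, b, p, q):
    """Expected utility theory with the precise midpoint credence ((p+q)/2, 1-(p+q)/2)."""
    m = (p + q) / 2
    def eu(option):
        return m * option[0] + (1 - m) * option[1]
    return a if eu(a) >= eu(b) else b

# e.g. compare the two rules when 0.3 <= x <= 0.4:
# print(value_of_rule(gamma_maximin_choice, 0.3, 0.4), value_of_rule(eu_midpoint_choice, 0.3, 0.4))
```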
Again, some notable features:
- in each case, MM judges EU to be better than itself (I suspect this is connected to the fact that there are no strictly proper scores for imprecise credences, but I'm not sure quite how yet! For treatments of that, see Seidenfeld, Schervish, & Kadane, Schoenfield, Mayo-Wilson & Wheeler, and Konek.)
- greater uncertainty (which is represented by a broader range of credence functions) leads to a bigger difference between MM and EU;
- having a midpoint that lies further from the centre also seems to lead to a bigger difference.
At some point, I'll try to write up some thoughts about the consequences of these facts. Could a decision theory that does not recommend itself be rationally adopted? But frankly it's far too hot to think about that today.