Self-recommending decision theories for imprecise probabilities

A PDF version of this post is available here.

The question of this blogpost is this: Take the various decision theories that have been proposed for individuals with imprecise probabilities---do they recommend themselves? It is the final post in a trilogy on the topic of self-recommending decision theories (the others are here and here).

One precise kitten and one imprecise kitten

Let's begin by unpacking the question.

First, imprecise probabilities (sometimes known as mushy credences; for an overview, see Seamus Bradley's SEP entry here). For various reasons, some formal epistemologists think we should represent an individual's beliefs not by a precise probability function, which assigns to each proposition about which they have an opinion a single real number between 0 and 1, but rather by a set of such functions. Some think that, whatever rationality requires of them, most individuals simply don't make sufficiently strong and detailed probabilistic judgments to pick out a single probability function. Others think that, at least when the individual's evidence is very complex or very vague or very sparse, rationality in fact requires them not to make judgments that pick out just one function. Whatever the reason, many think we should represent an individual's beliefs by a set $P$ of probability functions, which we call their representor, following van Fraassen. One way to understand a representor is like this: $P$ contains all the probability functions that respect the probabilistic judgments that the individual makes. For instance, if they judge that proposition $A$ is more likely than $B$, then every function in $P$ should assign higher probability to $A$ than to $B$; if they judge $A$ is more likely than not, then every function in $P$ should assign higher probability to $A$ than to its negation. If these are the only two probabilistic judgments they make, then their representor will be $P = \{p : p(A) > p(B)\ \&\ p(A) > p(\overline{A})\}$. And this clearly contains more than one probability function!
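To make this concrete, here's a toy sketch (my own construction, not from anything above): approximate the space of probability functions over three worlds by a coarse grid, let $A$ be true only at $w_1$ and $B$ be true only at $w_2$, and collect the functions that respect those two judgments.

```python
# A toy representor: grid of probability functions (p(w1), p(w2), p(w3))
# in steps of 0.1, with A = {w1} and B = {w2}.
step = 0.1
grid = []
for i in range(11):
    for j in range(11 - i):
        grid.append((i * step, j * step, 1 - (i + j) * step))

# Keep the functions with p(A) > p(B) and p(A) > p(not-A):
representor = [p for p in grid if p[0] > p[1] and p[0] > 1 - p[0]]
print(len(representor))  # many functions survive: the judgments don't pick out one
```

Even on this coarse grid, the two judgments leave many probability functions in play, which is exactly why the representor is a set rather than a single function.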

Second, decision theories for an individual with imprecise probabilities. Suppose you have opinions only about two possible worlds $w_1$ and $w_2$, which are exclusive and exhaustive. And suppose your representor is$$P = \{(x, 1-x) : 0.3 \leq x \leq 0.4\}$$where $(x, 1-x)$ is the probability function that assigns probability $x$ to $w_1$ and $1-x$ to $w_2$. That is, you think $w_1$ is between 30% and 40% likely, and $w_2$ is between 60% and 70% likely, but you make no stronger judgment than that. And now suppose you face a decision problem that consists of a choice between two options, $a$ and $b$, with the following payoff table:$$\begin{array}{r|cc}& w_1 & w_2 \\ \hline a & 13 & 0 \\ b & 0 & 7 \end{array}$$Then, if you take the probability function $(0.3, 0.7)$, which takes $w_1$ to be 30% likely and $w_2$ to be 70% likely, it expects $b$ to be better than $a$ ($4.9$ vs $3.9$); but if you take the probability function $(0.4, 0.6)$, which takes $w_1$ to be 40% likely and $w_2$ to be 60% likely, it expects $a$ to be better than $b$ ($5.2$ vs $4.2$). How, then, should you choose? It turns out that there are many possibilities! I'll list what I take to be the main contenders below before asking whether any of them recommend themselves.
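If you'd like to check those two expectations, here's a minimal calculation (the payoff table is the one above; the values in the comment are the exact expected utilities):

```python
# Payoffs of each option at (w1, w2), from the table above.
payoffs = {"a": (13, 0), "b": (0, 7)}

def expected_utility(x, option):
    """Expected utility of an option under the probability function (x, 1 - x)."""
    u1, u2 = payoffs[option]
    return x * u1 + (1 - x) * u2

# At (0.3, 0.7), b beats a (4.9 vs 3.9); at (0.4, 0.6), a beats b (5.2 vs 4.2).
print(expected_utility(0.3, "a"), expected_utility(0.3, "b"))
print(expected_utility(0.4, "a"), expected_utility(0.4, "b"))
```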

Third, self-recommending decision theories. Suppose you are unsure what decision problem you're about to face; indeed, you think each possible decision problem is equally likely. Then you can use any decision theory to pick between the available decision theories that you might use when faced with whichever decision problem arises. A decision theory is self-recommending if it says that it's permissible to pick itself.

Let's meet the decision theories for imprecise probabilities. This list may well not be comprehensive, but I've tried to identify the main ones (thanks to Jason Konek for an impromptu tutorial on $E$-admissibility and Maximality!). I state them in terms of impermissibility, but nothing hangs on that.

Suppose $P$ is a set of probability functions and $O$ is a set of options. Following Brian Weatherson, given a particular option $o$, we let $$l_o = \min_{p \in P} \mathrm{Exp}_p(o) \hspace{10mm} \text{ and } \hspace{10mm} u_o = \max_{p \in P} \mathrm{Exp}_p(o)$$

Global Dominance  $o$ in $O$ is impermissible iff there is $o'$ in $O$ such that $u_o < l_{o'}$.

$\Gamma$-Maximin  $o$ in $O$ is impermissible iff there is $o'$ in $O$ such that $l_o < l_{o'}$.

$\Gamma$-Maxi  $o$ in $O$ is impermissible iff there is $o'$ in $O$ such that one of the following holds:

  • $l_o < l_{o'}$ and $u_o < u_{o'}$
  • $l_o < l_{o'}$ and $u_o = u_{o'}$
  • $l_o = l_{o'}$ and $u_o < u_{o'}$

$\Gamma$-Hurwicz$_\lambda$  $o$ in $O$ is impermissible iff there is $o'$ in $O$ such that $$\lambda l_o + (1-\lambda) u_o < \lambda l_{o'} + (1-\lambda) u_{o'}$$

$E$-Admissibility  $o$ in $O$ is impermissible iff for all $p$ in $P$ there is $o'$ in $O$ such that$$\mathrm{Exp}_p(o) < \mathrm{Exp}_p(o')$$

Maximality  $o$ in $O$ is impermissible iff there is $o'$ in $O$ such that, for all $p$ in $P$,$$\mathrm{Exp}_p(o) < \mathrm{Exp}_p(o')$$

Notice that the difference between $E$-Admissibility and Maximality is in the order of the quantifiers.
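To see how the rules behave, here's a rough implementation of four of them (my own sketch, not from the post: the representor is approximated by a finite grid of eleven probability functions, and `exp_u`, `bounds`, and the rule names are my own), applied to the earlier decision problem between $a$ and $b$:

```python
# Options are pairs (utility at w1, utility at w2); P is a list of
# probability functions (p1, p2). Each rule returns the permissible options.

def exp_u(p, o):
    """Expected utility of option o under probability function p."""
    return p[0] * o[0] + p[1] * o[1]

def bounds(P, o):
    """The lower and upper expected utilities l_o and u_o over P."""
    eus = [exp_u(p, o) for p in P]
    return min(eus), max(eus)

def global_dominance(P, O):
    """Permissible iff there is no o' with u_o < l_{o'}."""
    return [o for o in O
            if not any(bounds(P, o)[1] < bounds(P, o2)[0] for o2 in O)]

def gamma_maximin(P, O):
    """Permissible iff l_o is maximal among the options."""
    best = max(bounds(P, o2)[0] for o2 in O)
    return [o for o in O if bounds(P, o)[0] == best]

def e_admissible(P, O):
    """Permissible iff o maximises expected utility under at least one p in P."""
    return [o for o in O
            if any(all(exp_u(p, o) >= exp_u(p, o2) for o2 in O) for p in P)]

def maximal(P, O):
    """Permissible iff no o' beats o under every p in P."""
    return [o for o in O
            if not any(all(exp_u(p, o2) > exp_u(p, o) for p in P) for o2 in O)]

# The example from earlier: P in steps of 0.01, options a and b.
P = [(x / 100, 1 - x / 100) for x in range(30, 41)]
a, b = (13, 0), (0, 7)
print(global_dominance(P, [a, b]))  # both: neither interval lies wholly above the other
print(gamma_maximin(P, [a, b]))     # only b: its worst case (4.2) beats a's (3.9)
print(e_admissible(P, [a, b]))      # both: each is best under some p in P
print(maximal(P, [a, b]))           # both: neither beats the other under every p
```

Note how the quantifier swap shows up in the code: `e_admissible` asks for one $p$ that vindicates $o$ against all rivals, while `maximal` asks that no single rival beats $o$ under all $p$.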

Next, let me specify the situation in which we're testing for self-recommendation a little more precisely. We imagine that there's a maximum utility that you might receive in the decision problem, let's say $n$; and the utilities you might receive come from $\{0, 1, 2, \ldots, n\}$. And we imagine that the decision problem will consist of three available options, $a, b, c$, each defined on the two worlds $w_1$ and $w_2$. So each decision problem has a payoff table like this, where $a_1, a_2, b_1, b_2, c_1, c_2$ come from $\{0, 1, 2, \ldots, n\}$:$$\begin{array}{r|cc}& w_1 & w_2 \\ \hline a & a_1 & a_2 \\ b & b_1 & b_2 \\ c & c_1 & c_2 \end{array}$$And we assume that each such decision problem is equally probable; and we assume that which decision problem you face is independent of which world you inhabit. So the probability of being at world $w_1$ and facing the decision problem $(a_1, a_2, b_1, b_2, c_1, c_2)$ is the probability of being at world $w_1$ multiplied by the probability of facing $(a_1, a_2, b_1, b_2, c_1, c_2)$, which is $\frac{1}{(n+1)^6}$, since each of the six payoffs takes one of $n+1$ values.
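A quick enumeration confirms the count of decision problems for a hypothetical small bound, say $n = 2$ (my choice, just to keep things tiny):

```python
from itertools import product

n = 2  # a hypothetical small maximum utility, so payoffs lie in {0, 1, 2}
# All tuples (a1, a2, b1, b2, c1, c2) of payoffs:
problems = list(product(range(n + 1), repeat=6))
print(len(problems))  # (n + 1) ** 6 = 729 equiprobable decision problems
```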

All of the decision theories we've mentioned will sometimes permit more than one option: for instance, Global Dominance permits an option $o$ if, for every other option $o'$, $u_o \geq l_{o'}$. In these cases, we assume that the individual picks at random between the permissible options.

Now, let $P = \{(x, 1-x) : 0.3 \leq x \leq 0.4\}$. Then none of the decision theories listed above is self-recommending. Indeed, every one of them prefers using Expected Utility with the probability function $m = (0.35, 0.65)$ to using itself. That is:

  • the maximum expected utility of using Global Dominance with $P$ is less than the minimum expected utility of using Expected Utility with $m$;
  • the minimum expected utility of using $\Gamma$-Maximin with $P$ is less than the minimum expected utility of using Expected Utility with $m$;
  • the minimum expected utility of using $\Gamma$-Maxi with $P$ is less than the minimum expected utility of using Expected Utility with $m$, and the maximum expected utility of using $\Gamma$-Maxi with $P$ is less than the maximum expected utility of using Expected Utility with $m$;
  • the weighted average of the minimum and maximum expected utilities of using $\Gamma$-Hurwicz$_\lambda$ with $P$ is less than the weighted average of the minimum and maximum expected utilities of using Expected Utility with $m$;
  • for all $p$ in $P$, the expected utility computed by $p$ of using $E$-Admissibility with $P$ is less than the expected utility computed by $p$ of using Expected Utility with $m$;
  • for all $p$ in $P$, the expected utility computed by $p$ of using Maximality with $P$ is less than the expected utility computed by $p$ of using Expected Utility with $m$.
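Verifying those six comparisons takes a real computation, but one weaker fact is easy to check and easy to see: since Expected Utility with $m$ picks an $\mathrm{Exp}_m$-maximiser in every decision problem, its value as computed by $m$ itself can never fall below any rival rule's. Here's a sketch of that check for $\Gamma$-Maximin with a hypothetical small bound $n = 2$ (my own implementation; the grid approximates the representor, and ties are broken uniformly at random as assumed above):

```python
from itertools import product

def exp_u(p, o):
    return p[0] * o[0] + p[1] * o[1]

def gamma_maximin(P, O):
    """Permissible options: those whose worst-case expected utility is maximal."""
    lower = [min(exp_u(p, o) for p in P) for o in O]
    best = max(lower)
    return [o for o, l in zip(O, lower) if l == best]

def eu_rule(m):
    """Expected Utility with the single probability function m."""
    def rule(P, O):
        best = max(exp_u(m, o) for o in O)
        return [o for o in O if exp_u(m, o) == best]
    return rule

def value(rule, P, p, problems):
    """Expected utility, as computed by p, of following the rule: average over
    the equiprobable problems of the mean EU of the permissible options."""
    total = 0.0
    for O in problems:
        perm = rule(P, O)
        total += sum(exp_u(p, o) for o in perm) / len(perm)
    return total / len(problems)

n = 2  # hypothetical small maximum utility, to keep the enumeration tiny
P = [(x / 100, 1 - x / 100) for x in range(30, 41)]  # grid approximation of P
m = (0.35, 0.65)
options = list(product(range(n + 1), repeat=2))
problems = [(a, b, c) for a in options for b in options for c in options]

v_eu = value(eu_rule(m), P, m, problems)
v_gm = value(gamma_maximin, P, m, problems)
print(v_eu >= v_gm)  # True: judged by m, no rule beats m's own picks
```

This is only a sanity check from $m$'s own perspective; the bullets above make the stronger claim that the preference survives even on the worst-case (minimum) and best-case (maximum) evaluations over the whole representor.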
