Wednesday, 29 August 2018

A new (?) sort of Dutch Book argument: exploitability vs dominance

The exploitability-implies-irrationality argumentative strategy


In decision theory, we often wish to impose normative constraints either on an agent's preference ordering or directly on the utility function that partly determines it. We might demand, for instance, that your preferences should not be cyclical, or that your utility function should discount the future exponentially. And in Bayesian epistemology, we often wish to impose normative constraints on credences. We might demand, for instance, that your credence in one proposition should be no greater than your credence in another proposition that it entails. In both cases, we often use a particular argumentative strategy to establish these norms: we'll call it the exploitability-implies-irrationality strategy (or EII, for short). I want to start by arguing that this is a bad argumentative strategy; and then I want to describe a way to replace it with a good argumentative strategy that is inspired by the problem we have identified with EII. I want to finish by sketching a version of the good argumentative strategy that would replace the EII strategy in the case of credal norms; that is, in the case of the Dutch Book argument. I leave it open here whether a similar strategy can be made to work in the case of preferences or utility functions. (I think this alternative argument strategy is new---it essentially combines an old result by Mark Schervish (1989) with a more recent result by Joel Predd and his co-authors at Princeton (2009); so it wouldn't surprise me at all if something similar has been proposed before---I'd welcome any information about this.)

The EII strategy runs as follows:

(I) Mental state-action link. It begins by claiming that, for anyone with a particular mental state---a preference ordering, a utility function, a credence function, or some combination of these---it is rationally required of them to choose in a particular way when faced with a decision problem.

Some examples:
(i) someone with preference ordering $a \prec b$ is rationally required to pay some amount of money to receive $b$ rather than $a$;
(ii) someone with credence $p$ in proposition $X$ should pay £$(p-\varepsilon)$ for a bet that pays out £1 if $X$ is true and £0 if $X$ is false---call this a £1 bet on $X$.

(II) Mathematical theorem. It proceeds to show that, for anyone with a mental state that violates the norm in question, there are decision problems the agent might face such that, if she does, then there are choices she might make in response to them that dominate the choices that premise (I) says are rationally required of her as a result of her mental state. That is, the first set of choices is guaranteed to leave her better off than the second set of choices.

Some examples:
(i) if $c \prec a \prec b \prec c$, then rationality requires you to pay to get $a$ rather than $c$, pay again to get $b$ rather than $a$, and pay again to get $c$ rather than $b$. If, instead, you had just chosen $c$ at the beginning and refused to pay anything to swap, you would now be better off for sure.
(ii) if you have credence $p$ in $XY$ and a credence $q < p$ in $X$, then you will sell a £1 bet on $X$ for £$(q + \varepsilon)$, and you'll buy a £1 bet on $XY$ for £$(p-\varepsilon)$. Providing $2\varepsilon < p - q$, these bets, taken together, lose you money for sure: up front you receive £$(q+\varepsilon)$ and pay £$(p-\varepsilon)$, a net loss of £$(p - q - 2\varepsilon)$, and however the truth values of $X$ and $Y$ turn out, the bets' payouts can only leave you the same or worse off. Thus refusing both bets is guaranteed to leave you better off.
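Example (ii) can be checked numerically. Here is a quick sketch with illustrative values for $p$, $q$ and $\varepsilon$ (these particular numbers are my own choices, not from the text):

```python
# Credence p in the conjunction XY, credence q < p in the conjunct X,
# and a margin eps small enough that the Dutch Book goes through.
p, q, eps = 0.6, 0.4, 0.05

# Money received up front: sell the £1 bet on X for q + eps,
# pay p - eps for the £1 bet on XY.
upfront = (q + eps) - (p - eps)

# Net position in each of the three possible cases:
# XY true: pay out 1 on the X bet, receive 1 from the XY bet.
case_xy = upfront - 1 + 1
# X true, Y false: pay out 1 on the X bet, receive nothing.
case_x_only = upfront - 1 + 0
# X false: neither bet pays out.
case_not_x = upfront

# A sure loss, whatever happens:
for net in (case_xy, case_x_only, case_not_x):
    assert net < 0
```

With these values the sure loss is £0.10 when $X$ is false or $XY$ is true, and £1.10 when $X$ is true but $Y$ is false.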

(III) Action-rationality link. The final premise says that, if there is some series of decision problems such that the choices your mental states rationally require you to make are dominated by some other set of choices you might have made instead, then your mental states are irrational.

Some examples:
(i) By (I-III)(i), we conclude that preferences $c \prec a \prec b \prec c$ are irrational.
(ii) By (I-III)(ii), we conclude that having a higher credence in a conjunction than in one of the conjuncts is irrational.

Now, there are often problems with the instance of (I) that is used in such EII arguments. For instance, there are many reasons to think rationality does not require someone with credence $p$ in $X$ to pay £$(p - \varepsilon)$ for a £1 bet on $X$. But my focus here is on (III).

The Problem with the Action-Rationality Link


The problem with (III) is this: It is clear that it is irrational to make a series of decisions when there is an alternative series that is guaranteed to do better---it is irrational because, when you act, you are attempting to maximise your utility and doing what you have done is guaranteed to be suboptimal as a means to that end; there is an alternative you can know a priori would serve that end better. But it is much less clear why it is irrational to have mental states that require you to make a dominated series of decisions when faced with a particular decision problem. When you choose a dominated option, you are irrational because there's something else you could have done that is guaranteed to serve your ends better. But when you have mental states that require you to choose a dominated option, that alone doesn't tell us that there is anything else you could have done---any alternative mental states you could have had---that are guaranteed to serve your ends better.

Of course, there is often something else you could have done that would not have required you to make the dominated choice. Let's focus on the case of credences. The Dutch Book Theorem shows that, if your credences are not probabilistic, then there's a series of decision problems and a dominated series of options from them that those credences require you to choose. The Converse Dutch Book Theorem shows that, if your credences are instead probabilistic, then there is no such series of decision problems and options. So it's true that there's something else you could do that's guaranteed not to require you to make a dominated choice. But making a dominated choice is not an eventuality so dreadful and awful that, if your credences require you to do it in the face of one particular sort of decision problem, they are automatically irrational, regardless of what they lead you to do in the face of any other decision problem and regardless of how likely it is that you face a decision problem in which they require it of you.

After all, for all the Dutch Book and Converse Dutch Book Theorems tell you, it might be that your non-probabilistic credences lead you to choose badly when faced with the very particular Dutch Book decision problem, but lead you to choose extremely profitably when faced with many other decision problems. And indeed, even in the case of the Dutch Book decision problem, it might be that your non-probabilistic credences require you to choose in a way that leaves you a little poorer for sure, while all the alternative probabilistic credences require you to choose in a way that leaves you with the possibility of great gain, but also the risk of great loss. In this case, it is not obvious that the probabilistic credences are to be preferred. Furthermore, you might have reason to think it extremely unlikely that you will ever face the Dutch Book decision problem itself, or at least that it is much more probable that you'll face other decision problems in which your credences don't lead you to choose a dominated series of options. For all these reasons, the mere possibility of a series of decision problems from which your credences require you to choose a dominated series of options is not sufficient to show that your credences are irrational. To show that, we need to show that there are some alternative credences that are in some sense sure to serve you better as you face the decision problems that make up your life. Without such alternatives that do better, pointing out a flaw in some mental state does not show that it is irrational, even if there are other mental states without the flaw---for those alternative mental states might have other strikes against them that the mental state in question does not have.

A new Dutch Book argument


So our question is now: Is there any sense in which, when you have non-probabilistic credences, there are some alternative credences that are guaranteed to serve you better as a guide in your decision-making? Borrowing from work by Mark Schervish ('A General Method for Comparing Probability Assessors', 1989) and Ben Levinstein ('A Pragmatist's Guide to Epistemic Utility', 2017), I want to argue that there is.

The pragmatic utility of an individual credence 


Our first order of business is to create a utility function that measures how good individual credences are as a guide to decision-making. Then we'll take the utility of a whole credence function to be the sum of the utilities of the credences that comprise it. (In fact, I think there's a way to do all this without that additivity assumption, but I'm still ironing out the creases in that.)

Suppose you assign credence $p$ to proposition $X$. Our job is to say how good this credence is as a guide to action. The idea is this:
  • an act is a function from states of the world to utilities---let $\mathcal{A}$ be the set of all acts;
  • an $X$-act is an act that assigns the same utility to all the worlds at which $X$ is true, and assigns the same utility to all worlds at which $X$ is false---let $\mathcal{A}_X$ be the set of all $X$-acts;
  • a decision problem is a set of acts; that is, a subset of $\mathcal{A}$---let $\mathcal{D}$ be the set of all decision problems;
  • an $X$-decision problem is a set of $X$-acts; that is, a subset of $\mathcal{A}_X$---let $\mathcal{D}_X$ be the set of all $X$-decision problems.
We suppose that there is a probability function $P$ that says how likely it is that the agent will face different $X$-decision problems---since the set of $X$-decision problems is infinite, we in fact take $P$ to be a probability density function. The idea here is that $P$ is something like an objective chance function. With that in hand, we take the pragmatic utility of credence $p$ in proposition $X$ to be the expected utility of the choices that credence $p$ in $X$ will lead you to make when faced with the decision problems you will encounter. That is, it is the integral, relative to the measure $P$, over the possible $X$-decision problems $D$ in $\mathcal{D}_X$ you might face, of the utility of the act you'd choose from $D$ using $p$, weighted by the probability that you'd face $D$.

Given $D$ in $\mathcal{D}_X$, let $D^p$ be the act you'd choose from $D$ using $p$---that is, $D^p$ is one of the acts in $D$ that maximises expected utility by the lights of $p$. Thus, for any $D$ in $\mathcal{D}_X$, and any act $a$ in $D$,$$\mathrm{Exp}_p(u(a)) \leq \mathrm{Exp}_p(u(D^p))$$ Then we define the pragmatic utility of credence $p$ in $X$ when $X$ is true as follows:
$$g_X(1, p) = \int_{\mathcal{D}_X}u(D^p, X) dP$$ And we define the pragmatic utility of credence $p$ in $X$ when $X$ is false as follows:
$$g_X(0, p) = \int_{\mathcal{D}_X}u(D^p, \overline{X}) dP$$ These are slight modifications of Schervish's and Levinstein's definitions.
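To make the construction concrete, here is a toy, discretised version of it. An $X$-act is represented as a pair (utility if $X$, utility if not-$X$), and an $X$-decision problem as a set of such acts; the particular acts, problems, and distribution $P$ over them are all illustrative choices of mine, not taken from the text:

```python
# A small finite family of X-decision problems, each a list of X-acts.
# An X-act is a pair: (utility if X is true, utility if X is false).
problems = [
    [(1.0, 0.0), (0.3, 0.3)],    # a £1 bet on X vs a safe option
    [(0.0, 1.0), (0.6, 0.6)],    # a £1 bet against X vs a safe option
    [(0.9, -0.5), (-0.2, 0.8)],  # two riskier acts
]
P = [0.5, 0.3, 0.2]  # probability of facing each problem

def choose(p, D):
    """The act in D that maximises expected utility by the lights of p."""
    return max(D, key=lambda act: p * act[0] + (1 - p) * act[1])

def g(truth, p):
    """Pragmatic utility of credence p in X when X has truth value
    `truth` (1 = true, 0 = false): the P-weighted average utility, at
    that truth value, of the act p selects from each problem."""
    idx = 0 if truth == 1 else 1
    return sum(prob * choose(p, D)[idx] for prob, D in zip(P, problems))
```

For instance, with these numbers, credence 0.7 in $X$ takes the bet on $X$, the safe option in the second problem, and the first act in the third, giving `g(1, 0.7) = 0.86` and `g(0, 0.7) = 0.08`.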


$g_X$ is a strictly proper scoring rule


Our next order of business is to show that the utility function $g_X$ just defined is a strictly proper scoring rule. That is, $\mathrm{Exp}_p(g_X(q)) = pg_X(1, q) + (1-p)g_X(0, q)$ is uniquely maximised, as a function of $q$, at $q = p$. We show this now:
\begin{eqnarray*}
\mathrm{Exp}_p(g_X(q)) & = & pg_X(1, q) + (1-p)g_X(0, q)\\
& = & p \int_{\mathcal{D}_X}u(D^q, X) dP + (1-p) \int_{\mathcal{D}_X}u(D^q, \overline{X}) dP \\
& = & \int_{\mathcal{D}_X}\left[p u(D^q, X) + (1-p) u(D^q, \overline{X})\right] dP\\
& = & \int_{\mathcal{D}_X} \mathrm{Exp}_p(u(D^q)) dP
\end{eqnarray*}
But, by the definition of $D^q$, if $q \neq p$, then, for all $D$ in $\mathcal{D}_X$,
$$\mathrm{Exp}_p(u(D^q)) \leq \mathrm{Exp}_p(u(D^p))$$
and, for some $D$ in $\mathcal{D}_X$,
$$\mathrm{Exp}_p(u(D^q)) < \mathrm{Exp}_p(u(D^p))$$
Now, for two credences $p$ and $q$ in $X$, say that a set of decision problems separates $p$ and $q$ if (i) each decision problem in the set contains just two available acts, and (ii) in each decision problem in the set, one act maximises expected utility by the lights of $p$, while the other maximises expected utility by the lights of $q$. Then, as long as there is some set of decision problems that separates $p$ and $q$ and to which $P$ assigns positive probability, we have
$$\mathrm{Exp}_p(g_X(q)) < \mathrm{Exp}_p(g_X(p))$$ And so the scoring rule $g_X$ is strictly proper.
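The propriety inequality can be checked numerically on a toy finite family of $X$-decision problems (all acts, problems and probabilities below are illustrative choices of mine). Note that with only finitely many problems no positive-probability set separates every pair $p \neq q$, so the maximum at $q = p$ is only weak here; strict propriety needs a rich enough class of problems:

```python
# Check that Exp_p(g(q)) = p*g(1,q) + (1-p)*g(0,q) is (weakly)
# maximised, as a function of q, at q = p.
problems = [
    [(1.0, 0.0), (0.3, 0.3)],    # each act: (utility if X, utility if not-X)
    [(0.0, 1.0), (0.6, 0.6)],
    [(0.9, -0.5), (-0.2, 0.8)],
]
P = [0.5, 0.3, 0.2]  # probability of facing each problem

def choose(p, D):
    """The act in D maximising expected utility by the lights of p."""
    return max(D, key=lambda act: p * act[0] + (1 - p) * act[1])

def g(truth, p):
    """Pragmatic utility of credence p in X at truth value `truth`."""
    idx = 0 if truth == 1 else 1
    return sum(prob * choose(p, D)[idx] for prob, D in zip(P, problems))

def exp_g(p, q):
    """Expected pragmatic utility, by the lights of p, of credence q."""
    return p * g(1, q) + (1 - p) * g(0, q)

grid = [i / 100 for i in range(101)]
for p in grid:
    # No alternative credence q does strictly better by p's own lights.
    assert all(exp_g(p, q) <= exp_g(p, p) + 1e-12 for q in grid)
```

The assertion is just the derivation above in miniature: $\mathrm{Exp}_p(g_X(q))$ is the $P$-weighted average of $\mathrm{Exp}_p(u(D^q))$, and each term is maximised by choosing with $p$ itself.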

The pragmatic utility of a whole credence function


The scoring rule $g_X$ we have just defined assigns pragmatic utilities to individual credences in $X$. In the next step, we define $G$, a pragmatic utility function that assigns pragmatic utilities to whole credence functions. We take the utility of a credence function to be the sum of the utilities of the individual credences it assigns. Suppose $c : \mathcal{F} \rightarrow [0, 1]$ is a credence function defined on the set of propositions $\mathcal{F}$. Then: $$G(c, w) = \sum_{X \in \mathcal{F}} g_X(w(X), c(X))$$ where $w(X) = 1$ if $X$ is true at $w$ and $w(X) = 0$ if $X$ is false at $w$. In this situation, we say that $G$ is generated from the scoring rules $g_X$ for $X$ in $\mathcal{F}$.




Predd, et al.'s Dominance Result


Finally, we appeal to a theorem due to Predd, et al. ('Probabilistic Coherence and Proper Scoring Rules', 2009):

Theorem (Predd, et al. 2009) Suppose $G$ is generated from strictly proper scoring rules $g_X$ for $X$ in $\mathcal{F}$. Then,
(I) if $c$ is not a probability function, then there is a probability function $c^*$ such that, $G(c, w) < G(c^*, w)$ for all worlds $w$;
(II) if $c$ is a probability function, then there is no credence function $c^* \neq c$ such that $G(c, w) \leq G(c^*, w)$ for all worlds $w$.
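Clause (I) can be illustrated with a minimal numerical example using the negative Brier score, which is strictly proper and so generates a pragmatic utility function of the required form. The credences below, and the recipe for producing a dominating $c^*$ (orthogonal projection onto the probability simplex, which works for the Brier score), are my own illustrative choices; the theorem itself only asserts that some such $c^*$ exists:

```python
# Negative Brier score on the two-cell partition {X, not-X}.
# c is non-probabilistic: its credences sum to 0.8, not 1.
c = (0.3, 0.5)  # credence in X, credence in not-X

# Orthogonally project c onto the probability simplex to get c_star.
shift = (1 - sum(c)) / 2
c_star = tuple(x + shift for x in c)  # approximately (0.4, 0.6)

def G(cred, world):
    """Negative Brier score: minus the squared Euclidean distance
    between the credences and the world's truth values."""
    return -sum((w - x) ** 2 for w, x in zip(world, cred))

worlds = [(1, 0), (0, 1)]  # X true; X false
# c_star does strictly better than c however the world turns out:
assert all(G(c_star, w) > G(c, w) for w in worlds)
```

Here $G(c^*, w)$ beats $G(c, w)$ by 0.02 at both worlds ($-0.72$ vs $-0.74$ when $X$ is true, $-0.32$ vs $-0.34$ when $X$ is false).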

This furnishes us with a new pragmatic argument for probabilism. And indeed, now that we have a pragmatic utility function that is generated from strictly proper scoring rules, we can take advantage of all of the epistemic utility arguments that make that same assumption, such as Greaves and Wallace's argument for Conditionalization, my arguments for the Principal Principle, the Principle of Indifference, linear pooling in judgment aggregation cases, and so on.

In this argument, we see that non-probabilistic credences are irrational not because there is some series of decision problems such that, when faced with them, the credences require you to make a dominated series of choices. Rather, they are irrational because there are alternative credences that are guaranteed to serve you better on average as a guide to action---however the world turns out, the expected or average utility you'll gain from making decisions using those alternative credences is greater than the expected or average utility you'll gain from making decisions using the original credences.

Comments:

  1. Utility functions and credal probability functions are mental states?! Rationality norms mandate action (rather than flag inadmissibility)? Both are minority views, neither is necessary for an EII strategy, and the latter is (with some argument) incompatible with the canonical paradigm of *static* decision making under risk to which this machinery belongs.

  2. Thanks, Greg! I guess I'm in that minority!

  3. What evidence do you offer in support of the empirical claim that these functions are psychological states as opposed to components of a numerical representation of some or another qualitative comparative judgment or choice?
