Transformative experiences and choosing when you don't know what you value

There is a PDF version of the blogpost here.

There are some things whose value for us we can't know until we have them. Love might be one of them, becoming a parent another. In many such cases, we can't know how much we value that thing until we have it because only by having it can we know what it's like. The nature of the experience of having it---how it feels, its phenomenal character---plays a crucial role in determining the value that we assign to it. What it will feel like to be in love is part of what determines its value for the person who will experience it. The phenomenal quality of the bliss when that love is reciprocated, what it's like for you to feel the pain when it's not---together with the other aspects of being in love, these determine its value for you. And, you might reasonably think, you can't know what it will be like until you experience it. Let's follow Laurie Paul in calling these epistemically transformative experiences. In this post, I'd like to consider Paul's argument in Transformative Experience that they pose a problem for our theories of rational choice. I used to think those theories had the resources to meet this challenge, but now I'm not so hopeful.

Sapphire, uncertain of her utilities, as of so much

To understand Paul's challenge, let me describe it in a particular case. Currently, I'm not a parent. Now, suppose I'm trying to decide whether or not to adopt a child. I know that many factors contribute to the value I assign to becoming a parent, but one of them is what the experience will be like for me. I might enjoy it enormously; I might find it on balance agreeable, but often exhausting and upsetting; or I might hate it. Looking around at my friends and hearing of others' experience of adoption, I come to the conclusion that there are two sorts of person. For the first sort, the character of their child doesn't make any difference to their experience of being a parent---they enjoy the experience well enough regardless, and a bit more than not being a parent. For the other, it makes an enormous difference---if the child's character is like their own, their experience of parenting is wonderful; if not, it's awful.

To put some numbers on this, suppose you conclude that you are one of these two types of person, but you don't know which, and the two types assign utilities as follows:$$\begin{array}{r|cc} U_1 & \textit{Similar} & \textit{Different} \\ \hline \textit{Child-free} & 0 & 0 \\ \textit{Parent} & 1 & 1 \end{array} \hspace{10mm}  \begin{array}{r|cc} U_2 & \textit{Similar} & \textit{Different} \\ \hline \textit{Child-free} & 0 & 0 \\ \textit{Parent} & 25 & -7 \end{array}$$How should you choose? Paul argues that our theories of rational choice can't help us, since, to apply them, we must know our utilities. To that challenge, I responded in Choosing for Changing Selves that you should simply incorporate your uncertainty about your utilities into your representation of the decision problem. That is, you should not take the states of the world to be simply Similar or Different, since you don't know your utilities for the different options at those states of the world. Rather, you should take them to be Similar & my utilities are given by $U_1$, Similar & my utilities are given by $U_2$, Different & my utilities are given by $U_1$, and Different & my utilities are given by $U_2$, since you do know the utilities of the options at these states of the world---the utility of being a parent at Similar & my utilities are given by $U_2$ is 25, for instance. So the decision now looks like this:$$\begin{array}{r|cccc} & \textit{Similar}\ \&\ U_1 & \textit{Different}\ \&\ U_1 & \textit{Similar}\ \&\ U_2 & \textit{Different}\ \&\ U_2 \\ \hline \textit{Child-free} & 0 & 0 & 0 & 0\\ \textit{Parent} & 1 & 1 & 25 & -7 \end{array}$$And you should assign probabilities to these states and then apply your theory of rational choice to this newly specified decision. I called this the fine-graining response to Paul's challenge.

So, for instance, suppose that you think Similar and Different are equally likely, and $U_1$ and $U_2$ are equally likely, and they're independent of one another; then you should assign credence $\frac{1}{4}$ to each of these states. Suppose that your theory of rational choice tells you to maximise expected utility. Then you should become a parent, because its expected utility is $\frac{1}{4}\times 1 + \frac{1}{4}\times 1 + \frac{1}{4}\times 25 + \frac{1}{4}\times (-7) = 5$, while the expected utility of the alternative is 0. 
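To make the arithmetic explicit, here is a minimal sketch in Python of this fine-grained expected utility calculation; the state labels and helper names are mine, purely for illustration, and the credences and utilities are those of the example above.

```python
# Fine-grained states: (worldly state, utility function), each with credence 1/4.
credences = {
    ("Similar", "U1"): 0.25,
    ("Different", "U1"): 0.25,
    ("Similar", "U2"): 0.25,
    ("Different", "U2"): 0.25,
}

# Utilities of each option at each fine-grained state, read off the table above.
utilities = {
    "Child-free": {s: 0 for s in credences},
    "Parent": {
        ("Similar", "U1"): 1,
        ("Different", "U1"): 1,
        ("Similar", "U2"): 25,
        ("Different", "U2"): -7,
    },
}

def expected_utility(option):
    """Credence-weighted average of the option's utilities over the states."""
    return sum(p * utilities[option][s] for s, p in credences.items())

print(expected_utility("Child-free"))  # 0.0
print(expected_utility("Parent"))      # 5.0
```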

Now, notice something about that solution in the case of expected utility theory. Suppose that, whether your utilities are given by $U_1$ or by $U_2$, your expected utility for one option is greater than your expected utility for another. That is, while you don't know what your utilities are, you do know how they'd order those two options. Then, at least assuming that you take the original states of the world to be independent of the facts about your utilities, the fine-graining strategy I described above will also give higher expected utility to the first option than to the second.*

But I've become convinced over the past few years that expected utility theory isn't the correct theory of rational choice. Rather, the correct theory is something closer to Lara Buchak's risk-weighted expected utility theory, which builds on John Quiggin's rank-dependent utility theory. According to this theory, we don't choose between options by comparing their expected utilities; instead, we compare their risk-weighted expected utilities, which are calculated using our own personal risk function.

A risk function is a function $R : [0, 1] \rightarrow [0, 1]$ that is continuous, strictly increasing, and for which $R(0) = 0$ and $R(1) = 1$. Given a risk function $R$, and an option $o$ such that $U(o, s_1) \leq U(o, s_2) \leq \ldots \leq U(o, s_n)$, and where the probability of state $s_i$ is $p_i$, the risk-weighted expected utility of $o$ is

$$\begin{aligned} REU_R(U(o)) ={} & U(o, s_1)\ + \\ & R(p_2 + \ldots + p_n)(U(o, s_2) - U(o, s_1))\ + \\ & R(p_3 + \ldots + p_n)(U(o, s_3) - U(o, s_2))\ + \ldots\ + \\ & R(p_{n-1} + p_n)(U(o, s_{n-1}) - U(o, s_{n-2}))\ + \\ & R(p_n)(U(o, s_n) - U(o, s_{n-1})) \end{aligned}$$

One natural family of risk functions is $R_n(p) = p^n$ for $n > 0$. If $0 < n < 1$, then $R_n$ is risk-inclined; if $n > 1$, then $R_n$ is risk-averse; and if $n = 1$, then $R_n$ is risk-neutral.
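For concreteness, here is a minimal Python sketch of the risk-weighted expected utility formula above, together with the power family of risk functions; the helper names are mine. It orders the outcomes from worst to best and weights each utility increment by the risk-transformed probability of doing at least that well.

```python
def reu(outcomes, risk):
    """Risk-weighted expected utility of a list of (probability, utility) pairs.

    Starts from the worst-case utility and adds each successive utility
    increment weighted by risk(probability of doing at least that well).
    """
    outcomes = sorted(outcomes, key=lambda pu: pu[1])  # order from worst to best
    total = outcomes[0][1]
    for i in range(1, len(outcomes)):
        prob_at_least = sum(p for p, _ in outcomes[i:])
        total += risk(prob_at_least) * (outcomes[i][1] - outcomes[i - 1][1])
    return total

def power_risk(n):
    """The risk function R_n(p) = p^n: risk-averse for n > 1, risk-inclined for n < 1."""
    return lambda p: p ** n
```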

Now suppose we consider again my decision whether to become a parent, this time from the point of view of this theory of rational choice. And let's suppose I'm quite risk-averse, with a risk function $R(p) = p^2$. Then let's begin by looking at the decision from the point of view of $U_1$ and $U_2$ separately. If your utilities are given by $U_1$, then the risk-weighted expected utility of remaining child-free is 0, and of becoming a parent is 1; if $U_2$, then the risk-weighted expected utility of remaining child-free is 0 again, but of becoming a parent is $-7+R(1/2)\times 32 = 1$. So, either way, you'll prefer to become a parent. But now consider the fine-grained version of the decision:$$\begin{array}{r|cccc} & \textit{Similar}\ \&\ U_1 & \textit{Different}\ \&\ U_1 & \textit{Similar}\ \&\ U_2 & \textit{Different}\ \&\ U_2 \\ \hline \textit{Child-free} & 0 & 0 & 0 & 0\\ \textit{Parent} & 1 & 1 & 25 & -7 \end{array}$$Then the risk-weighted expected utility of remaining child-free is still 0, but of becoming a parent is $-7 + R(3/4)\times 8 + R(1/4)\times 24 = -1$. So you prefer remaining child-free.
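Continuing the sketch above, and reusing its reu and power_risk helpers, we can reproduce the numbers in this paragraph:

```python
R = power_risk(2)  # the risk-averse risk function R(p) = p^2 from the example

# Decision by the lights of U2 alone: Similar and Different each with probability 1/2.
print(reu([(0.5, 25), (0.5, -7)], R))  # 1.0, i.e. -7 + (1/2)^2 * 32

# Fine-grained decision: the four states, each with probability 1/4.
parent = [(0.25, 1), (0.25, 1), (0.25, 25), (0.25, -7)]
print(reu(parent, R))  # -1.0, i.e. -7 + (3/4)^2 * 8 + (1/4)^2 * 24
```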

In general, it turns out that, if you use risk-weighted expected utility theory with a risk function that isn't risk-neutral, there will be cases like this: you are uncertain of your utilities; you know that, whichever utilities are yours, they prefer one option to another; and yet, when you fine-grain the states of the world and make the decision that way, you prefer the second option to the first.

It seems to me, then, that epistemically transformative experiences do pose a problem for theories of rational choice like Buchak's. It's true that the fine-graining strategy allows us to apply our theory of rational choice even when we are uncertain about our own utilities. But adopting that strategy only pushes the problem elsewhere, for it then turns out that there will be cases in which I know I prefer one option to another---since I know that, whichever of the possible utility functions I actually have, it prefers that option---but I also prefer the other option to the first---since, when I apply the fine-graining strategy to determine which option I prefer, I prefer the other. And this, it seems to me, is an untenable situation.

----------------------------------

* Suppose $U_1, \ldots, U_m$ are the possible utilities you might have, and $q_1, \ldots, q_m$ are the probabilities that they are yours; and suppose $s_1, \ldots, s_n$ are the possible states of the world, and $p_1, \ldots, p_n$ are their probabilities; and suppose $U_j(o, s_i)$ is the utility of $o$ at state $s_i$ by the lights of utilities $U_j$. Then the expected utility of an option $o$ from the point of view of the fine-grained set of states, $s_i\ \&\ U_j$, is $$\sum^m_{j=1} \sum^n_{i=1} q_jp_i U(o, s_i\ \&\ U_j)= \sum^m_{j=1} \sum^n_{i=1} q_jp_i U_j(o, s_i) = \sum^m_{j=1} q_j \sum^n_{i=1} p_i U_j(o, s_i) $$So, if $$\sum^n_{i=1} p_i U_j(o, s_i) < \sum^n_{i=1} p_i U_j(o', s_i) $$for all $j= 1, \ldots, m$, then $$\sum^m_{j=1} \sum^n_{i=1} q_jp_i U(o, s_i\ \&\ U_j) < \sum^m_{j=1} \sum^n_{i=1} q_jp_i U(o', s_i\ \&\ U_j)$$

Comments

  1. Yes, I think that's the obvious answer. Honestly, I've never understood why anyone took this as a serious challenge...like this kind of uncertainty about how much you might enjoy something was part of expected utility theory from day 1.

    However, I have a bit of a more technical question about what you mean by utility. I mean, it seems like there are two different standardish ways to use utility: as a vague term for the amount of hedonic joy an outcome brings, or as just a measure of consequentialist value, with the implication being that the utility valuation is filled in by the Morgenstern (von Neumann?) existence theorem.

    However, I get a bit puzzled when people are vague about the nature of utility (eg aren't clearly committing to it being the integral of pleasure dt) while also not endorsing the definition of utility as the real-valued function that satisfies the (risk-neutral) mixed-state axioms.

    1. To be clear, I'm asking you not because I'm skeptical you have an answer but because I'm pretty sure you will (maybe it's answered in the book you linked, but I'm on my phone now, institutional access is a pain there, and I wouldn't know where to look).

    2. Thanks for this! It's a good question. I suspect that, to make sense of the idea that your utility is something about which you might be uncertain, you're going to need it to be a quantity that measures some sort of good like pleasure, but I guess it needn't be pleasure itself. I don't go into this a lot in the book. I note that we're using something like a realist view of utilities, which takes them to be real quantities, rather than simply mathematical artefacts extracted from preferences using representation theorems, but I don't say a great deal more than that.

  2. Also, let me give you my general challenge to any non-risk-neutral account of rational choice theory: it suggests that your choices should depend on causally irrelevant facts like whether there are huge numbers of copies of you that realize all outcomes at the relevant rates.

    Suppose that, instead of being asked whether I should pick the pure state s0 or the mixed state with a p chance of s1 and a (1-p) chance of s2 (picked so that a risk-neutral theory favors the second but a risk-weighted theory favors the first), I'm told that there are a huge number (as in busy beaver of a billion) of copies of me and I'm deciding whether all of those copies should enjoy state s0 or a fraction p of them should enjoy s1 and a fraction (1-p) should enjoy s2.

    Since the latter choice involves no risk at all, presumably (by assumption) I should pick the version that realizes s1 and s2 at the relevant rates. But this leads to some really crazy implications: eg that what I should rationally prefer depends on whether or not I believe something like quantum many minds is true [1]. Also, it suggests that, if I'm unsure whether I'm actually in the matrix and the sysadmins are just running me through the simulation repeatedly but wiping my memory each time, that should affect my rational choice (this avoids any concern that those other quantum branches aren't really you).

    I think the best option to deal with this is just to accept the usual risk-neutral theory but inform it by the strong psychological reactions we have to risks that we are aware of. That reconciles the concerns, because it explains why in actual life we do care about risk while avoiding these issues. The reason we pick the pure state isn't the effect of risk but the effect of our awareness of the risk on the resulting utilities.


    1: suppose that, as is an open possibility in physics, there are only a very, very large but finite number of dimensions in the Hilbert space for our universe, so we avoid any problems with infinity.

    1. Also, this deals with the much more realistic objection regarding making choices for someone else. We'd like to say that, if I'm making a decision for a future generation, we do best for them by making the same choice they would rationally favor. But if we have a non-risk-neutral theory, this leads to some weird outcomes. For instance, suppose you are deciding between two sperm donors, one of which has no particular pluses or minuses, while the other has some chance of giving the child a gene that causes severe depression but also a chance of giving the child a gene that makes them particularly happy (replace w/ whatever traits add/subtract utility in your mind).

      Now, on a risk-neutral theory you can just sum up the expected utilities. On a non-neutral theory, it seems like we have to ask whether or not to consider the potential children with different genes the same individual, since, if not, then from their POV there is no risk, while if so, there is, and you must take that risk into account.

    2. This is great! I guess it has some similarities to an objection that Buchak herself considers, where she points out that, if you know you're going to face a decision again and again over the course of your life, with the same probabilities, and you'll make the same choice every time, then even her theory says that you should choose in line with expected utility theory. I think my own response is that it's fine for the theory to tell you to behave differently depending on whether you think you're choosing on behalf of countless duplicates, or whether you think you're choosing just for yourself.

  3. This reminds me of the section in van Fraassen's Laws and Symmetry where he discusses how Simpson's-Paradox-style correlation reversals can yield incorrect recommendations when marginalizing. I liked the way he phrased it in terms of taking advice from a panel of experts vs. taking advice from a single expert made by a mixture of the panel. In both the Simpson's paradox example and the risk-weighting example, the decisions suggested by individual experts are not guaranteed to survive aggregation. It makes me wonder about how rational we are even when problems don't arise. Having a voting pool of one future self instead of multiple doesn't mean that we have a principled way of avoiding judgement aggregation issues---it just means we got lucky.

