Choosing for others when you don't know their attitudes to risk

 A PDF of this blogpost is available here.

[This blogpost is closely related to the one from last week. In this one, I ask how someone should choose on behalf of a group of people when she doesn't know their attitudes to risk; in the previous one, I asked how someone should choose for themselves when one of their decisions might cause them to change their attitudes to risk. Many of the same considerations carry over from the previous post to this one, and so I apologise for some repetition.]

A new virus has emerged and it is spreading at speed through the population of your country. As the person charged with public health policy, it falls to you to choose which measures to take. Yet there is much you don't know about the virus at this early stage, and therefore much you don't know about the effects of the different possible measures you might take. You don't know how severe the virus is, for instance. It might typically cause mild illness and very few deaths, but it might be much more dangerous than that. Initial data suggests it's mild, but the evidence isn't definitive.

As well as your ignorance of these medical facts about the virus, there are also facts you don't know about the population that will be affected by whatever measures you impose. You might not know, for instance, the value that each person in the country assigns to the different possible outcomes of those measures. You might not know how much people value being able to gather in groups during the winter months, either to chat with friends, look in on vulnerable family, stave off loneliness, or attend communal worship as part of their religion. You might not know how much people disvalue feeling ill, suffering the long-term effects of a virus, or seeing their children's education stall and their anxieties worsen.

[Image: Is he risk-averse or risk-neutral? Answer unknown.]

If this is the only source of your uncertainty, then a natural solution presents itself, at least in principle: you add your uncertainty about the public's utilities for the various outcomes of the various possible measures into your specification of the decision you face, and you maximise expected utility. Simplifying the choice greatly, we might present it as follows: you think it's 25% likely that the virus is severe, and 75% likely it's mild; you have two public health measures at your disposal, impose restrictions or don't; if the virus is severe, restrictions work pretty well at stemming its spread, and if you impose none it's a disaster; if the virus is mild, however, restrictions create problems and things go badly, though they aren't a disaster, and if you impose none, then everything goes pretty well. Perhaps you know that the utilities the public assigns to the four possible outcomes are given either by the first table below (U1), or by the second (U2):$$\begin{array}{r|cc} \textbf{U1} & \text{Severe} & \text{Mild} \\ & \textit{Probability = 1/4} & \textit{Probability = 3/4} \\ \hline \text{Restrictions}  & 10 & 7\\  \text{No Restrictions}  & 2 & 10 \end{array}$$

$$\begin{array}{r|cc} \textbf{U2} & \text{Severe} & \text{Mild} \\ & \textit{Probability = 1/4} & \textit{Probability = 3/4} \\ \hline \text{Restrictions}  & 12 & 7\\  \text{No Restrictions}  & 0 & 10 \end{array}$$

If it's the former, then you maximise expected utility by imposing no restrictions; if it's the latter, you do so by imposing restrictions. So how to choose? Well, you incorporate the uncertainty about the public's utilities into the decision problem. Perhaps you think each is as likely as the other.$$ \begin{array}{r|cccc} & \text{Severe + U1} & \text{Mild + U1} & \text{Severe + U2} & \text{Mild + U2} \\ & \textit{1/8} & \textit{3/8} & \textit{1/8} & \textit{3/8} \\ \hline \text{Restrictions}  & 10 & 7 & 12 & 7\\  \text{No Restrictions}  & 2 & 10 & 0 & 10 \end{array}$$Again we compare the expected utilities of the two measures, and this time the restrictions win out: 8 for restrictions against 7.75 for none.
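For readers who like to check such calculations mechanically, here is a minimal sketch in Python; the function and variable names are mine, but the probabilities and utilities are read straight off the tables above.

```python
def expected_utility(probs, utils):
    """Probability-weighted average of the utilities across states."""
    return sum(p * u for p, u in zip(probs, utils))

# States ordered (severe, mild), with probabilities (1/4, 3/4).
probs = [1/4, 3/4]
U1 = {"restrictions": [10, 7], "no restrictions": [2, 10]}
U2 = {"restrictions": [12, 7], "no restrictions": [0, 10]}

for label, table in (("U1", U1), ("U2", U2)):
    for option, utils in table.items():
        print(label, option, expected_utility(probs, utils))
# U1: restrictions 7.75 < no restrictions 8.0  -> no restrictions
# U2: restrictions 8.25 > no restrictions 7.5  -> restrictions

# Fine-grained states: severe+U1, mild+U1, severe+U2, mild+U2.
fine_probs = [1/8, 3/8, 1/8, 3/8]
fine_utils = {"restrictions": [10, 7, 12, 7], "no restrictions": [2, 10, 0, 10]}
for option, utils in fine_utils.items():
    print(option, expected_utility(fine_probs, utils))
# restrictions 8.0 > no restrictions 7.75 -> restrictions win
```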

But let's suppose there is no uncertainty about how the population values the possible outcomes: you're sure it's as described in the first table above (U1). Nonetheless, there is another relevant fact you don't know about the attitudes in the population. You don't know their attitudes to risk.

Notice that the option on which you impose no restrictions is a risky one. It's true that, in the most probable state of the world, where the virus is mild, it turns out very well. But in the other state, where the virus is severe, it turns out very badly. As a result, the expected utility of imposing no restrictions only just exceeds the expected utility of imposing some: reduce the utility of no restrictions in the severe case by just one unit and the expectations are equal; reduce it by two, and the restrictions win out.
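Explicitly:$$\begin{array}{rcl} EU(\text{restrictions}) & = & (3/4)(7) + (1/4)(10) = 7.75\\ EU(\text{no restrictions}) & = & (3/4)(10) + (1/4)(2) = 8 \end{array}$$and reducing the severe-case utility of no restrictions from 2 to 1 gives $(3/4)(10) + (1/4)(1) = 7.75$, which ties the two options exactly.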

This is relevant because some people are risk-averse. That is, they give more weight to the worst-case outcomes than expected utility theory says they should. If offered the choice between a single unit of utility for sure, and the toss of a fair coin that gives three units if heads and none if tails, people will often choose the guaranteed single unit, even though the coin toss has higher expected utility. Plausibly, this is because the worst-case scenario looms larger in their calculation than expected utility theory allows. When assessing the value of the coin toss, the case in which it lands tails and they receive nothing gets more weight than the case in which it lands heads and they receive three units of utility, even though those two cases have the same probability. What's more, many think such behaviour is perfectly rational---and if it is, then it is something that a public health policy-maker should take into account.
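In that example:$$EU(\text{coin toss}) = (1/2)(0) + (1/2)(3) = 1.5 > 1 = EU(\text{sure thing})$$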

Let me pause briefly to describe one way in which we might incorporate our own attitudes to risk into our personal decision-making in a rational way. It's due to Lara Buchak, building on John Quiggin's earlier work. In expected utility theory, we take the value of an option to be its expected utility and we choose the most valuable option; in Buchak's risk-weighted expected utility theory, we take the value of an option to be its risk-weighted expected utility and we choose the most valuable option. I'll describe how that works for an option $o$ that is defined on just two states of the world, $s_1$ and $s_2$, where the probability of $s_i$ is $p_i$, and the utility of $o$ at $s_i$ is $u_i$. The expected utility of $o$ is$$EU(o) = p_1u_1 + p_2u_2$$If $u_1 \leq u_2$, this can also be written as$$EU(o) = u_1 + p_2(u_2-u_1)$$That is, to calculate the expected utility of $o$, you take the utility it gains you in the worst-case scenario (in this case, $u_1$) and add to it the extra utility you gain from it in the best-case scenario, weighted by the probability that you're in the best-case scenario. Similarly, if $u_2 \leq u_1$, it can be written as$$EU(o) = u_2 + p_1(u_1-u_2)$$You calculate the risk-weighted expected utility in the same way, except that you transform the probability of the best-case scenario using a function $R$, which Buchak calls your risk function and which encodes your attitudes to risk. So, if $u_1 \leq u_2$, then $$REU_R(o) = u_1 + R(p_2)(u_2 - u_1)$$ And, if $u_2 \leq u_1$, then $$REU_R(o) = u_2 + R(p_1)(u_1-u_2)$$
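Here is the same definition as a small Python sketch; the function name and the example risk functions are my own choices, and only the formula itself is Buchak's.

```python
def reu_two_state(u1, u2, p2, R):
    """Risk-weighted expected utility of an option over two states.

    u1, u2 -- utilities of the option in states s1 and s2
    p2     -- probability of s2 (so s1 has probability 1 - p2)
    R      -- risk function on [0, 1] with R(0) = 0 and R(1) = 1;
              it reweights the probability of the *best-case* state.
    """
    u_worst, u_best = min(u1, u2), max(u1, u2)
    p_best = p2 if u2 >= u1 else 1 - p2
    return u_worst + R(p_best) * (u_best - u_worst)

risk_neutral = lambda p: p        # identity: REU collapses to EU
risk_averse = lambda p: p ** 2    # one convex example: underweights best cases
```

With the identity risk function this is just expected utility; any $R$ lying below the identity underweights the best-case scenario.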

To illustrate with the choice between restrictions and none:$$\begin{array}{rcl} REU(\text{restrictions}) & = & 7 + R(1/4)(10-7) \\ REU(\text{no restrictions}) & = & 2 + R(3/4)(10-2) \end{array} $$Suppose you are risk-averse, and so place less weight on the best-case scenarios than expected utility theory requires. That is, $R(1/4) < 1/4$ and $R(3/4) < 3/4$. Perhaps, for instance, $R(1/4) = 1/8$ and $R(3/4) = 5/8$. Then$$\begin{array}{rcl} REU(\text{restrictions}) & = & 7 + (1/8)(10-7) = 7.375\\ REU(\text{no restrictions}) & = & 2 + (5/8)(10-2) = 7 \end{array} $$So you prefer the restrictions. On the other hand, if you're risk-neutral and follow expected utility theory, so that your risk function is just $R(p) = p$ for all $0 \leq p \leq 1$, then you prefer no restrictions.
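Or, plugging the numbers in directly (using the stipulated values $R(1/4) = 1/8$ and $R(3/4) = 5/8$ for the risk-averse agent):

```python
# Risk-averse agent, using the risk-function values stipulated above.
reu_restrictions = 7 + (1/8) * (10 - 7)        # 7.375
reu_no_restrictions = 2 + (5/8) * (10 - 2)     # 7.0

# Risk-neutral agent: R(p) = p, recovering the expected utilities.
eu_restrictions = 7 + (1/4) * (10 - 7)         # 7.75
eu_no_restrictions = 2 + (3/4) * (10 - 2)      # 8.0

print(reu_restrictions > reu_no_restrictions)  # True: risk-averse -> restrictions
print(eu_restrictions < eu_no_restrictions)    # True: risk-neutral -> no restrictions
```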

One of the deepest lessons of Buchak's analysis is that two people who agree exactly on how likely each state of the world is, and agree exactly on how valuable each outcome would be, can disagree rationally about what to do. They can do this because they have different attitudes to risk, those different attitudes can be rational, and those attitudes lead them to combine the relevant probabilities and utilities in different ways to give the value of the options under consideration.

Why is this an important lesson? Because ignoring it leads to a breakdown in debate and public deliberation. Often, during the COVID-19 pandemic, participants on both sides of debates about the wisdom of restrictions seemed to assume that any disagreement about what to do must arise from disagreement about the probabilities or about the utilities or simply from the irrationality of their interlocutors. If they felt it was a disagreement over probabilities, they often dismissed their opponents as stupid or ignorant; if they felt it was a disagreement about utilities, they concluded their opponent was selfish or callous. But there was always a further possibility, namely, attitudes to risk, and it seemed to me that often this was really the source of the disagreement.

But it's not my purpose here to diagnose a common failure of pandemic discussions. Instead, I want to draw attention to a problem that arises when we don't know the risk attitudes of the people who will be affected by our decision. To make the presentation simple, let's idealise enormously and suppose that everyone in the population agrees on the probabilities, the utilities, and their attitudes to risk. We know their probabilities and their utilities---they're the ones given in the following table (U1):$$\begin{array}{r|cc} & \text{Severe} & \text{Mild} \\ & \textit{Probability = 1/4} & \textit{Probability = 3/4} \\ \hline \text{Restrictions}  & 10 & 7\\  \text{No Restrictions}  & 2 & 10 \end{array}$$But we don't know their shared attitudes to risk. Either they're risk-neutral, choosing by maximising expected utility, in which case they prefer no restrictions to restrictions; or they're risk-averse to the extent that they prefer the restrictions. In such a case, how should we choose?
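To get a feel for how much risk aversion is needed to flip the verdict, suppose, purely for illustration, that the population's shared risk function belongs to the one-parameter family $R(p) = p^k$, where $k = 1$ is risk-neutrality and larger $k$ means greater risk aversion. A quick bisection finds the tipping point:

```python
# REU(restrictions) - REU(no restrictions) under R(p) = p**k, for the
# table above: restrictions give 7 (mild) or 10 (severe), no restrictions
# give 10 (mild) or 2 (severe); P(severe) = 1/4.
def gap(k):
    return (7 + (1/4) ** k * 3) - (2 + (3/4) ** k * 8)

lo, hi = 1.0, 2.0        # gap(1) < 0, gap(2) > 0
for _ in range(50):      # bisection on the sign change
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
print(round(hi, 3))      # ~1.31: any more risk-averse, and restrictions win
```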

A natural first thought is that we might do as I described above when I asked how to choose when you're unsure of the utilities assigned to the outcomes by those affected. In that case, you simply incorporate your uncertainty about the utilities into your decision problem. And so here we might hope to incorporate your uncertainty about the risk attitudes into your decision problem.

The problem is that, at least on Buchak's account, your attitudes to risk determine the way in which probabilities and utilities should be combined to give an evaluation of an option. Even if we include our uncertainty about the population's attitudes to risk in our specification of the decision, so that each state of the world specifies not only whether the virus is mild or severe but also whether the population is risk-averse or risk-neutral, we must still use some risk function to combine the probabilities of these more fine-grained states of the world with the utilities of an option at those states to give a value for the option. And which option is evaluated as better depends on which risk attitudes we use to do this.
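To see the circularity concretely, here is a sketch of the general rank-dependent form of Buchak's rule (the $n$-state analogue of the two-state formula above), applied to the fine-grained decision problem; the point is that the evaluation cannot be run at all until some particular $R$ is supplied.

```python
def reu(outcomes, R):
    """Risk-weighted expected utility, rank-dependent form.

    outcomes -- list of (probability, utility) pairs covering all states
    REU = worst utility, plus, for each step up the ranking,
          R(prob. of doing at least that well) * (size of the step).
    """
    outcomes = sorted(outcomes, key=lambda pu: pu[1])  # worst to best
    value, tail = outcomes[0][1], 1.0
    for i in range(1, len(outcomes)):
        tail -= outcomes[i - 1][0]   # P(landing in a state ranked i or above)
        value += R(tail) * (outcomes[i][1] - outcomes[i - 1][1])
    return value

# Fine-grained states: mild+R-a, severe+R-a, mild+R-n, severe+R-n.
# The outcome utilities depend only on the virus, not on the risk attitudes.
restrictions    = [(3/8, 7), (1/8, 10), (3/8, 7), (1/8, 10)]
no_restrictions = [(3/8, 10), (1/8, 2), (3/8, 10), (1/8, 2)]

for R in (lambda p: p, lambda p: p ** 2):  # risk-neutral, then risk-averse
    print(reu(restrictions, R), reu(no_restrictions, R))
# identity R:  7.75 < 8.0   -> no restrictions
# R(p) = p^2:  7.1875 > 6.5 -> restrictions
```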

An analogy might be helpful at this point. Consider a classic case of decision-making under normative uncertainty: you don't know whether utilitarianism or Kantianism is the true moral theory. You're 80% sure it's the former and 20% sure it's the latter. Now imagine you face a binary choice where utilitarianism says one option is morally required, while Kantianism says it's the other. How should you proceed? One natural proposal: if we can ask each moral theory how much value it assigns to each option at each state of the world, then we can calculate our expected value for each option, taking into account both our uncertainty about the world and our uncertainty about the normative facts. The trouble is that, while utilitarianism would likely endorse this as the morally right way to make the meta-decision, Kantianism wouldn't. The problem is structurally the same as in the case of uncertainty about risk attitudes: in both cases, one of the things we're uncertain about is the right way to make the very sort of decision in question.

This suggests that the literature on normative uncertainty would be a good place to look for a solution to this problem, and indeed you can easily see how putative solutions to that problem might translate into putative solutions to ours: for instance, we might go with the My Favourite Theory proposal and use the risk attitudes that we consider it most likely the population has. But the problems that beset that proposal there arise here as well. In any case, I'll leave this line of investigation until I'm better acquainted with that literature. For the time being, let me wrap up by noting a putative solution that I think won't work.

Inspired by Pietro Cibinel's intriguing recent paper, we might try a contractualist approach to the problem. The idea is that, when someone else chooses on your behalf, depending on what they choose and how things turn out, you might have a legitimate complaint about the choice they make, and it seems a reasonable principle to try to choose in a way that minimises such complaints across the population for whom you're choosing. To apply this idea, we have to say when you have a legitimate complaint about a decision made on your behalf, and how strong it is. Here is Cibinel's account, which I find compelling: you don't have a legitimate complaint when you'd have chosen the same option; nor do you have one when you'd have chosen a different option, but that option would have left you worse off than the one that was chosen; but you do have a legitimate complaint when you'd have chosen differently, and your choice would have left you better off than the choice that was made---furthermore, the strength of the complaint is proportional to how much better off you'd have been had they chosen the option you favoured. On this basis, we have the following set of complaints in the different possible states of the world, where 'R-a' means 'risk-averse' and 'R-n' means 'risk-neutral':$$ \begin{array}{r|cccc} & \text{Severe + R-a} & \text{Mild + R-a} & \text{Severe + R-n} & \text{Mild + R-n} \\ & \textit{1/8} & \textit{3/8} & \textit{1/8} & \textit{3/8} \\ \hline \text{Restrictions}  & 0 & 0 & 0 & 3\\  \text{No Restrictions}  & 8 & 0 & 0 & 0 \end{array}$$ In this situation, how should we choose?
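Before answering, it may help to see where those numbers come from; here is a minimal sketch of the complaint bookkeeping, with the encoding of Cibinel's rule my own:

```python
# Utilities from table U1: option -> {state: utility}.
utility = {
    "restrictions":    {"mild": 7, "severe": 10},
    "no restrictions": {"mild": 10, "severe": 2},
}
# What each group would have chosen for itself.
preferred = {"R-a": "restrictions", "R-n": "no restrictions"}

def complaint(chosen, group, state):
    """Cibinel's rule: a complaint arises only if the group preferred the
    other option AND that option left it better off in the realised state;
    its strength is the utility difference."""
    favourite = preferred[group]
    if favourite == chosen:
        return 0
    return max(0, utility[favourite][state] - utility[chosen][state])

for chosen in utility:
    for group in preferred:
        for state in ("severe", "mild"):
            print(chosen, group, state, complaint(chosen, group, state))
# Restrictions: one complaint, strength 3, from R-n in the mild state.
# No restrictions: one complaint, strength 8, from R-a in the severe state.
```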

One initial thought is that you should choose the option whose worst possible complaint is weakest: that is, we should choose the restrictions because, in their worst-case scenario, they generate a complaint of strength 3, while in its worst-case scenario, imposing no restrictions generates a complaint of strength 8. The problem with this is that it isn't sensitive to probabilities: yes, imposing no restrictions might generate a worse complaint than imposing the restrictions might, but it would do so with considerably lower probability. But if we are to choose in a way that is sensitive to the probabilities, our problem arises again: we must combine the probabilities of the different states of the world with something like the utilities of choosing particular options at those states---the utility of a complaint is the negative of its strength, say---and different risk attitudes demand different ways of doing that. If we rank the options by the risk-weighted expectation of the legitimate complaints they generate, a risk-neutral person will prefer no restrictions, while a sufficiently risk-averse person will favour restrictions.$$ \begin{array}{rcl} REU(\text{restrictions}) & = & -3 + R(5/8)\times 3\\ REU(\text{no restrictions}) & = & -8 + R(7/8) \times 8 \end{array} $$ So, if $R(5/8) = 4/8$ and $R(7/8) = 6/8$, then the restrictions come out ahead; but if $R(5/8) = 5/8$ and $R(7/8) = 7/8$, then no restrictions win out.
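Concretely, with the two pairs of risk-function values just mentioned (entering each complaint as a negative utility):

```python
def reu_of_complaints(worst_complaint, p_no_complaint, R):
    # Two outcomes: -worst_complaint with prob. 1 - p_no_complaint, else 0.
    return -worst_complaint + R(p_no_complaint) * worst_complaint

# The stipulated risk-averse values; any other input would need a fuller R.
averse_values = {5/8: 4/8, 7/8: 6/8}

for label, R in (("risk-averse", averse_values.get),
                 ("risk-neutral", lambda p: p)):
    r = reu_of_complaints(3, 5/8, R)   # restrictions: complaint 3, prob 3/8
    n = reu_of_complaints(8, 7/8, R)   # no restrictions: complaint 8, prob 1/8
    print(label, r, n)
# risk-averse:  -1.5 > -2.0    -> restrictions
# risk-neutral: -1.125 < -1.0  -> no restrictions
```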

What to do? I don't know. It's an instance of a more general problem. When you face a decision under uncertainty, you are trying to choose the best means to your ends. But there might be reasonable disagreement about how to do this. Different people might have different favoured ways of evaluating the available means to their ends, even when the ends themselves are shared. If you are charged with making a decision on behalf of a group of people in which there is such disagreement, or in which there is in fact agreement but you are uncertain which approach they all favour, you must choose how to choose on their behalf. But the different ways of choosing means to ends might also deliver different verdicts on the meta-decision about how to choose on their behalf. And then it is unclear how you should proceed. It is a social choice problem that has received surprisingly little attention.


Comments

  1. This isn’t about your problem, Richard, but about the prior assumption that risk aversion/appetite can be rational, as in Buchak (I’ve been discussing this with my colleague, Nick Makins). I can see, of course, that utilities might not be linearly related to e.g. money, and that some might dislike/like the anxiety/excitement occasioned by risk. But after all that kind of thing has been taken into account, is there any remaining rational room to avoid/pursue risk? Nick says that there’s nothing to stop people putting the risks into the outcome utilities, so to speak, and I see the point. But I can’t help feeling that doing so is in tension with the whole project of choosing means to ends under conditions of risk. No need to answer in detail. I presume this must have been thrashed out in the literature. I’d be grateful just for a pointer to the relevant reading.

    Replies
    1. Hi David, Thanks for this! So Buchak herself considers this possibility in Risk and Rationality, and comes to a similar conclusion to yours, namely, that if you do this you aren't giving a proper account of means-ends reasoning, since you're somehow attaching value to acquiring means by certain ends and not by others. But Orri Stefansson and Richard Bradley have a series of papers where they try to make out the sort of account that Nick Makins is describing: this is probably the place to start - https://www.journals.uchicago.edu/doi/full/10.1093/bjps/axx035. Maybe we'll get a chance to chat about it next week at King's!

  2. Ah. I've just seen the abstract for your talk at King's next week. I like the look of that. (You say: 'I'll argue it is a necessary but not sufficient condition on an adequate decision theory that it is self-recommending. I show that expected utility theory is self-recommending, but its most popular rivals are not.')

