Utilitarianism and risk: a reply to Mogensen

Like Lara Buchak, I think rationality permits many different attitudes to risk: without ever falling into irrationality, you might be extremely risk-averse, quite risk-averse, just a little risk-averse, risk-neutral, a teeny bit risk-inclined, very risk-inclined, exorbitantly risk-inclined, and many points in between. To illustrate, suppose you are offered a choice between two units of utility for sure (Sure Thing in the pay-off table below) and a gamble on a fair coin toss that gives five units of utility if the coin lands heads and none if it lands tails (Gamble below). Then I think rationality permits you to be risk-averse and choose the sure thing, and it permits you to be risk-neutral and choose the gamble, in line with expected utility theory.

$$\begin{array}{r|cc}& \text{Heads} & \text{Tails} \\ \hline \text{Sure Thing} & 2 & 2 \\ \text{Gamble} & 5 & 0 \end{array}$$
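To make the contrast concrete, here is the expected utility calculation for the two options, alongside one way a risk-averse evaluation might go, using Buchak's risk-weighted expected utility with the risk function $r(p) = p^2$ (the particular function is an illustrative choice of mine, purely for the example):

$$\begin{aligned}
\mathrm{EU}(\text{Sure Thing}) &= 2, & \mathrm{EU}(\text{Gamble}) &= \tfrac{1}{2}\cdot 5 + \tfrac{1}{2}\cdot 0 = 2.5,\\
\mathrm{REU}(\text{Sure Thing}) &= 2, & \mathrm{REU}(\text{Gamble}) &= 0 + r\!\left(\tfrac{1}{2}\right)(5 - 0) = \tfrac{1}{4}\cdot 5 = 1.25.
\end{aligned}$$

So the risk-neutral agent prefers Gamble, while this risk-averse agent prefers Sure Thing.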

This raises an interesting question for utilitarians. In what follows, I'll focus on total utilitarianism in particular. This says the best action is the one that produces the greatest total aggregate utility. However, it is not a complete theory of moral action, since we are often uncertain which action produces the greatest total utility. Instead, we assign probabilities to the different outcomes of the different possible actions. But how are we to use these, together with the total aggregate utility in each outcome, to determine which action to choose? Standardly, utilitarians appeal to expected utility theory: the morally right action is the one that produces the greatest total aggregate utility in expectation. That is, for each possible action, we take each of its outcomes, weight the total utility of that outcome by the probability that the action brings it about, and sum these weighted utilities to give the expectation of the action's total utility; we then pick an action whose expectation is maximal. But if many different attitudes to risk are permitted, and rationality does not require you to maximise expected utility, why should morality require it? And if morality doesn't require it, what does it require? Is there a particular attitude to risk you should adopt when choosing morally? Or is any rationally permitted attitude also morally permitted?
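In symbols (the notation is mine, just to fix ideas): if $o$ ranges over the possible outcomes of an action $a$, $P(o \mid a)$ is the probability that $a$ brings about $o$, and $U(o)$ is the total aggregate utility of $o$, the standard rule says to pick an action that maximises

$$\mathrm{EU}(a) = \sum_{o} P(o \mid a)\, U(o).$$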

Local risk-taker

Again like Lara Buchak, I think that, when choosing an action that will affect a particular group of people, the attitude to risk you should use is determined by their attitudes to risk. In a straightforward case, for instance, if they all share the same attitudes to risk and you know this, then you should also use those attitudes to risk when choosing your action. Let's call this the Risk Principle.

I have recently appealed to something like the Risk Principle to argue that, contrary to the claims of longtermists, total utilitarianism demands not that we devote our resources to ensuring a long future for humanity, but rather that we use them to hasten human extinction. The argument is essentially that extinction is the less risky option: a long future for humanity could contain vast amounts of pleasure and happiness, but it could also contain unfathomable amounts of pain and suffering; and we should choose using risk-averse preferences, since these are predominant in our current society and likely to remain so. Andreas Mogensen has since responded to my argument by highlighting a tension between the way in which Buchak argues for the Risk Principle and the version of total utilitarianism that results from adding the principle to it.

Mogensen notes that Buchak argues for the principle by appealing to considerations that are more typically associated with Scanlon's version of contractualism. He writes:

"[The principle] is motivated by the thought that when choosing for others, we should err against subjecting people to risks we’re not sure they would take on their own behalf. Thus, Buchak holds that 'we cannot choose a more-than-minimally risky gamble for another person unless we have some reason to think that he would take that gamble himself' (Buchak 2019: 74). The ideal of justifiability to each individual is also taken to support [the principle], in the form of the idea that we should 'take only the risks that no one could reasonably reject.' (ibid.)"

And then, at least as I understand him, he makes two claims.

First, if a consequentialist appeals to contractualist considerations, such as whether you can justify your decision to each of the people it affects, then this must be because they value something beyond the welfare of those people: they must also assign value to being able to justify the decision to them. Call this the Extra Source of Value Objection.

Second, he argues that the version of total utilitarianism we obtain when we append the Risk Principle to it in fact runs contrary to the contractualist norms we used to motivate that principle. Call this the Self-Undermining Objection.

Let's take them in turn. Of course, it's true that consequentialists, and certainly total utilitarians, don't usually think there is any component of an action that must be justifiable to the individuals it will affect before it can count as moral. But that's because, for the most part, they've worked in a framework that hasn't reckoned with permissivism about rationality, and so there has been no room for such considerations. For even before we reckon with permissivism about rational risk attitudes, a problem arises for utilitarians because of permissivism about rational belief.

Suppose we must choose between the two options, Sure Thing and Gamble, which will affect two individuals, Asa and Bea. The total aggregate utility of the options is given in the table below. The outcome of Gamble is determined by whether the FTSE 100 stock index rises or not.

$$\begin{array}{r|cc}& \text{FTSE rises} & \text{FTSE doesn't rise} \\ \hline \text{Sure Thing} & 2 & 2 \\ \text{Gamble} & 5 & 0 \end{array}$$

We will choose by maximising expected total aggregate utility. Asa thinks it is 50% likely the FTSE will rise and 50% likely it won't, and so prefers Gamble to Sure Thing. Bea is less bullish, thinking it is only 25% likely the index will rise and 75% likely it won't, and so prefers Sure Thing to Gamble. They disagree about how likely the two states of the world are, but not because they have different evidence: they don't. Rather, it's because they began their epistemic lives with different prior probabilities. What's more, for both of them, the priors with which they began were rationally permitted.

Now it is our job to choose between the two options on their behalf. How should we choose? Utilitarianism is silent. It tells us how to choose once we've fixed our probabilities over the future of the stock index, but it doesn't tell us how to fix them. No considerations of total welfare tell us what to do here, because this isn't about the ends we seek, which utilitarianism specifies clearly, but rather about the means we take to achieve them, about which it says nothing.

At this point, then, we might bring in considerations more usually associated with contractualism. We might say that, in our calculation of the expected utilities, we should use probabilities that we can justify to each of the individuals affected: perhaps some sort of aggregate of their probabilities. So, for instance, if they were all to agree on the probabilities they assign, while we, as decision-makers, assign different ones, then, providing theirs are rationally permissible and based on all the evidence available, we should use theirs, since only by doing this can we justify our choice of probabilities to them. Nothing here has added anything to our assessment of the value of the outcomes: those are still valued precisely at their total aggregate welfare.
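For concreteness, here are the expectation calculations behind Asa's and Bea's preferences, together with what a simple linear average of their credences would recommend (averaging is just one possible aggregate among many; nothing here commits us to that rule):

$$\begin{aligned}
\text{Asa:}\quad & \mathrm{EU}(\text{Gamble}) = 0.5 \cdot 5 + 0.5 \cdot 0 = 2.5 > 2 = \mathrm{EU}(\text{Sure Thing}),\\
\text{Bea:}\quad & \mathrm{EU}(\text{Gamble}) = 0.25 \cdot 5 + 0.75 \cdot 0 = 1.25 < 2 = \mathrm{EU}(\text{Sure Thing}),\\
\text{Average:}\quad & P(\text{rise}) = \tfrac{0.5 + 0.25}{2} = 0.375, \quad \mathrm{EU}(\text{Gamble}) = 0.375 \cdot 5 = 1.875 < 2.
\end{aligned}$$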

The same might be said, suitably adapted, about moral decisions in the face of permissivism about rational attitudes to risk. We might imagine instead that Asa and Bea agree that it's 50-50 whether the FTSE will rise or not, but Asa is risk-neutral, and thus prefers Gamble to Sure Thing, while Bea is sufficiently risk-averse that she prefers Sure Thing to Gamble. Again, we must ask: how should we choose? And again utilitarianism is silent, because we are asking not about the ends we seek, which total utilitarianism fixes for us, but about the means by which we pursue those ends, which it doesn't. And so again we might ask which risk attitudes we can justify using to all affected parties, without thereby creating any tension with the core commitments of total utilitarianism. So I think the Extra Source of Value Objection fails.
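Again for concreteness, here is how the disagreement might look in Buchak's risk-weighted expected utility framework, taking Asa's risk function to be the identity $r(p) = p$ (risk-neutrality) and Bea's to be the risk-averse function $r(p) = p^2$ (the specific function is, again, my illustrative choice):

$$\begin{aligned}
\text{Asa:}\quad & \mathrm{REU}(\text{Gamble}) = 0 + r\!\left(\tfrac{1}{2}\right)(5 - 0) = \tfrac{1}{2}\cdot 5 = 2.5 > 2 = \mathrm{REU}(\text{Sure Thing}),\\
\text{Bea:}\quad & \mathrm{REU}(\text{Gamble}) = 0 + r\!\left(\tfrac{1}{2}\right)(5 - 0) = \tfrac{1}{4}\cdot 5 = 1.25 < 2 = \mathrm{REU}(\text{Sure Thing}).
\end{aligned}$$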

Let's turn now to the Self-Undermining Objection. Suppose we are again choosing between two options on behalf of Asa and Bea. This time, the pay-off table looks like this, where we specify not only the total aggregate utility, but also the individual utilities that we add together to give it:

$$\begin{array}{r|cc}& \text{FTSE rises} & \text{FTSE doesn't rise} \\ \hline \text{Sure Thing} & \text{Asa: } 2,\ \text{Bea: } 2\ (\text{total: } 4) & \text{Asa: } 2,\ \text{Bea: } 2\ (\text{total: } 4) \\ \text{Gamble} & \text{Asa: } 5,\ \text{Bea: } 0\ (\text{total: } 5) & \text{Asa: } 0,\ \text{Bea: } 5\ (\text{total: } 5) \end{array}$$

Both Asa and Bea are sufficiently risk-averse that, thinking purely of their own utility, each prefers Sure Thing to Gamble. However, from the point of view of total utility, Gamble strictly dominates Sure Thing: it obtains five units of utility in total, regardless of how the FTSE performs, while Sure Thing obtains only four. So, from the point of view of total utility, whichever attitudes to risk we use, we will pick Gamble, going against the unanimous preferences of those affected. But surely this is not a choice that can be justified to each affected party?
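To see both sides of the tension in one place, assume, as in the previous example, that both assign 50-50 credence to the FTSE rising, and use the same illustrative risk-averse function $r(p) = p^2$ for each of them:

$$\begin{aligned}
\text{Total utility:}\quad & \text{Gamble gives } 5 + 0 = 0 + 5 = 5 \text{ in each state} \;>\; 4 = 2 + 2 \text{ from Sure Thing},\\
\text{Each individual:}\quad & \mathrm{REU}(\text{Gamble}) = 0 + r\!\left(\tfrac{1}{2}\right)(5 - 0) = 1.25 \;<\; 2 = \mathrm{REU}(\text{Sure Thing}).
\end{aligned}$$

So total utility recommends Gamble in every state, while each affected individual prefers Sure Thing.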

But while I agree that some of what Buchak writes in defence of the Risk Principle can be read in a way that seems in tension with this conclusion, I think there's another way to think about it. As noted in response to the Extra Source of Value Objection, total utilitarianism gives an account of what is valuable, but, at least if you think that there are many rational prior probabilities or many rational attitudes to risk, it doesn't give an account of how to choose morally when you're uncertain how the world is, because it doesn't tell you which probabilities or which attitudes to risk to use when you make your decision. It is the choice of these that we should be able to justify to the individuals affected, not the choice of option itself, or at least not primarily.

If we can do that, then we justify the choice we make to them by saying: I think the value of an outcome is its total utility; I needed to fix on a single probability function and a single set of attitudes to risk in order to choose between options given this account of final value; and I used these probabilities and these attitudes to risk, which I have justified to you. Now, they might retort: But you've chosen one option when we all preferred the other! But to that we can respond: Ah, true, but you were not considering the same decision problem I was. For you, the value of each outcome was the utility it obtained for you; for me, it was the total utility it obtained for the population.

It is, of course, surprising that these two things come apart: on the one hand, choosing when you value an outcome for its total welfare; on the other, preserving unanimous preferences. But that's just the way things have to shake out if we think rationality is permissive.

So, in the end, I think both of Mogensen's objections can be answered once we understand more clearly how the contractualist idea is being used to justify something like the Risk Principle.
