Teru Thomas on the Veil of Ignorance

Before you is a range of options: perhaps they are different laws you might implement in the country you govern, or institutions you might inaugurate; perhaps they are public health measures you might implement, or strategies for combating the climate crisis. Whatever they are, there is a population they will affect, and there is some uncertainty about how well-off each option will leave each person in that population. For each way the world might be, you know how well-off each option will leave each person if it is that way; and you have probabilities for each way the world might be. How should you choose between the options?

An old idea is that you should choose for this population as you would if you were choosing for yourself from behind a veil of ignorance. That is, you should reduce this social choice scenario to an individual choice scenario as follows: assume you are in the population, but completely uncertain who in the population you are; and then you choose as a rational individual would choose in that situation. That is, the correct social choice is the correct individual choice made behind the veil of ignorance. Rawls adopts this idea, and thinks that the correct individual choice in that situation is the one whose worst-case scenario is best; Harsanyi, on the other hand, thinks the correct individual choice is the one that maximises expected utility; and Lara Buchak thinks it's the one that maximises risk-weighted expected utility by the lights of the most risk-averse reasonable risk attitudes.
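To make the contrast between these decision rules concrete, here's a small Python sketch (my own illustration, with made-up numbers, not drawn from any of these authors) comparing Rawls's worst-case rule with Harsanyi's expected-utility rule for a single chooser behind the veil:

```python
# Illustrative sketch: two hypothetical options behind the veil of ignorance,
# each a list of (probability, utility) pairs over the "who am I?" states.

def worst_case(option):
    """Rawls: evaluate an option by its worst possible utility."""
    return min(u for _, u in option)

def expected_utility(option):
    """Harsanyi: evaluate an option by its probability-weighted average utility."""
    return sum(p * u for p, u in option)

# A safe option and a gamble, with made-up numbers.
safe   = [(0.5, 4.0), (0.5, 4.0)]
gamble = [(0.5, 1.0), (0.5, 9.0)]

# Rawls prefers the safe option (worst case 4 beats 1) ...
assert worst_case(safe) > worst_case(gamble)
# ... while Harsanyi prefers the gamble (expected utility 5 beats 4).
assert expected_utility(gamble) > expected_utility(safe)
```

Buchak's risk-weighted rule would sit between these, distorting the weight given to worse outcomes according to a risk function.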

[Image: Bluebell hiding beneath the veil of ignorance]

It's a nice idea, and intuitively it captures something of what we mean by picking the fairest option. But is there anything inevitable about it? Drawing on his earlier work with David McCarthy and Kalle Mikkola, Teru Thomas has offered an argument that there is. It's an extremely elegant argument, so I thought it might be worth doing a post to advertise it, and to walk through the essential steps. 

The structure of the argument is this: Thomas lays down three principles and shows that it follows from those alone that you should choose in the social case as you would choose in the individual case from behind the veil of ignorance. The three principles are all invariance principles: that is, they say that when two social choice situations are similar in a certain way, you should choose similarly in both. I'll illustrate the argument with the simplest example that can capture the reasoning: one where there are just three individuals in the population, Ada, Bab, and Cam, only two possible states of the world, $s_1$ and $s_2$, and two options, $o_1$ and $o_2$, from which we must choose. We will transform the original social choice problem into the associated individual choice problem behind the veil of ignorance in three steps; and, at each step, one of the invariance principles will tell us that the options that are permissible in the transformed choice problem are those that correspond to the options that are permissible in the original choice problem.

So let's start with the original social choice problem, which we write in the following payoff matrix, where $(u, v, w)$ is the outcome in which Ada gets utility $u$, Bab gets $v$, and Cam gets $w$, and where the bottom row gives the probabilities of the states of the world. $$\begin{array}{r|cc} & s_1 & s_2 \\ \hline o_1 & (u_{11}, v_{11}, w_{11}) & (u_{12}, v_{12}, w_{12}) \\o_2 & (u_{21}, v_{21}, w_{21}) & (u_{22}, v_{22}, w_{22}) \\ \hline P & p_1 & p_2 \end{array}$$

We begin by describing the states of the world in a more fine-grained way that doesn't affect how well-off each person is at that state of the world. Indeed, we simply assume that, at each state of the world, there is an ordered list of the people in the population. In our case, where the population comprises just Ada, Bab, and Cam, there are six such lists: Ada-Bab-Cam; Ada-Cam-Bab; ...; Cam-Ada-Bab; Cam-Bab-Ada. So there are twelve fine-grained states of the world:

  • a version of $s_1$ that contains the list Ada-Bab-Cam, which we write $s^{ABC}_1$;
  • a version of $s_1$ that contains the list Ada-Cam-Bab, which we write $s^{ACB}_1$;
  • and so on.
As we said, these lists make no difference to how well-off Ada, Bab, or Cam is at any state of the world, and we consider each equally likely given a state of the world, and so the new pay-off matrix is this:$$\begin{array}{r|cccccc} & s^{ABC}_1 & s^{ACB}_1 & s^{BAC}_1 & s^{BCA}_1& s^{CAB}_1 & s^{CBA}_1 \\ \hline o'_1 & (u_{11}, v_{11}, w_{11})  & (u_{11}, v_{11}, w_{11})& (u_{11}, v_{11}, w_{11}) & (u_{11}, v_{11}, w_{11})& (u_{11}, v_{11}, w_{11}) & (u_{11}, v_{11}, w_{11}) \\ o'_2 & (u_{21}, v_{21}, w_{21})   & (u_{21}, v_{21}, w_{21}) & (u_{21}, v_{21}, w_{21}) & (u_{21}, v_{21}, w_{21})& (u_{21}, v_{21}, w_{21}) & (u_{21}, v_{21}, w_{21}) \\ \hline P & p_1/6 & p_1/6& p_1/6& p_1/6& p_1/6& p_1/6\end{array}$$

$$\begin{array}{r|cccccc} & s^{ABC}_2 & s^{ACB}_2 & s^{BAC}_2 & s^{BCA}_2& s^{CAB}_2 & s^{CBA}_2 \\ \hline o'_1 & (u_{12}, v_{12}, w_{12})  & (u_{12}, v_{12}, w_{12}) & (u_{12}, v_{12}, w_{12}) & (u_{12}, v_{12}, w_{12})& (u_{12}, v_{12}, w_{12}) & (u_{12}, v_{12}, w_{12}) \\ o'_2 & (u_{22}, v_{22}, w_{22})   & (u_{22}, v_{22}, w_{22}) & (u_{22}, v_{22}, w_{22}) & (u_{22}, v_{22}, w_{22})& (u_{22}, v_{22}, w_{22}) & (u_{22}, v_{22}, w_{22})\\ \hline P & p_2/6 & p_2/6& p_2/6& p_2/6& p_2/6& p_2/6 \end{array}$$

According to Thomas's first invariance principle, option $o'_i$ is permissible in this new choice situation iff $o_i$ is permissible in the old one. And in general, any fine-graining of the states that does not affect the utilities each individual gets should not affect which options are permissible. Thomas calls this refinement invariance.
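Refinement can be made concrete in code. Here's a minimal Python sketch (my own illustration, with made-up utilities, not Thomas's formalism) that splits each state into six equally likely copies, one per ordering of the population, leaving every individual's utility untouched:

```python
from itertools import permutations

people = ("Ada", "Bab", "Cam")
orders = list(permutations(people))  # the six orderings

# Original problem with made-up numbers: option -> state -> (Ada, Bab, Cam).
problem = {"o1": {"s1": (4, 2, 7), "s2": (5, 5, 1)},
           "o2": {"s1": (3, 6, 2), "s2": (8, 0, 4)}}
probs = {"s1": 0.3, "s2": 0.7}

# Fine-grain: each state splits into six equally likely copies, one per
# ordering of the people; every individual's utility is copied over unchanged.
refined = {opt: {(s, order): outcome
                 for s, outcome in states.items() for order in orders}
           for opt, states in problem.items()}
refined_probs = {(s, order): probs[s] / len(orders)
                 for s in probs for order in orders}

# The refinement changes nothing about who gets what ...
assert all(refined[opt][(s, order)] == problem[opt][s]
           for opt in problem for s in probs for order in orders)
# ... and the six copies of each state recover its original probability.
for s in probs:
    assert abs(sum(refined_probs[(s, order)] for order in orders) - probs[s]) < 1e-9
```

Refinement invariance then says that passing from `problem` to `refined` should make no difference to which options are permissible.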

The next thing to do is to provide a choice problem in which these lists do make a difference. Indeed, they determine who gets which utilities at a state of the world: each person receives the utility that, in the baseline Ada-Bab-Cam outcome, goes to the person who occupies their position in the list. So if, at state $s_i$ when the list is Ada-Bab-Cam, Ada gets utility $u$, Bab gets $v$, and Cam gets $w$, then, at that state when the list is Bab-Cam-Ada, for instance, Ada gets utility $v$, Bab gets $w$, and Cam gets $u$. That is, we have a new choice problem, where the probabilities remain the same as before:$$\begin{array}{r|cccccc} & s^{ABC}_1 & s^{ACB}_1 & s^{BAC}_1 & s^{BCA}_1& s^{CAB}_1 & s^{CBA}_1 \\ \hline o''_1 & (u_{11}, v_{11}, w_{11})  & (u_{11}, w_{11}, v_{11})& (v_{11}, u_{11}, w_{11}) & (v_{11}, w_{11}, u_{11})& (w_{11}, u_{11}, v_{11}) & (w_{11}, v_{11}, u_{11}) \\ o''_2 & (u_{21}, v_{21}, w_{21})   & (u_{21}, w_{21}, v_{21}) & (v_{21}, u_{21}, w_{21}) & (v_{21}, w_{21}, u_{21})& (w_{21}, u_{21}, v_{21}) & (w_{21}, v_{21}, u_{21}) \\ \hline P & p_1/6 & p_1/6& p_1/6& p_1/6& p_1/6& p_1/6\end{array}$$


$$\begin{array}{r|cccccc} & s^{ABC}_2 & s^{ACB}_2 & s^{BAC}_2 & s^{BCA}_2& s^{CAB}_2 & s^{CBA}_2 \\ \hline o''_1 & (u_{12}, v_{12}, w_{12})  & (u_{12}, w_{12}, v_{12}) & (v_{12}, u_{12}, w_{12}) & (v_{12}, w_{12}, u_{12})& (w_{12}, u_{12}, v_{12}) & (w_{12}, v_{12}, u_{12}) \\ o''_2 & (u_{22}, v_{22}, w_{22})   & (u_{22}, w_{22}, v_{22}) & (v_{22}, u_{22}, w_{22}) & (v_{22}, w_{22}, u_{22})& (w_{22}, u_{22}, v_{22}) & (w_{22}, v_{22}, u_{22})\\ \hline P & p_2/6 & p_2/6& p_2/6& p_2/6& p_2/6& p_2/6 \end{array}$$

According to Thomas's second invariance principle, option $o''_i$ is permissible in this new choice situation iff $o'_i$ is permissible in the previous one. To state the general version, we need a little terminology: given a choice problem, an individual's predicament at a state of the world, relative to that choice problem, is just the list of utilities they would get from the different options available in that choice problem at that state of the world. The second invariance principle then says that, if two choice problems have the same population and the same states of the world, and if, at each state of the world, the number of people in each predicament is the same in the two choice problems, then the same options should be permissible in both. Thomas calls this statewise invariance. The utilities that $o''_i$ assigns to the individuals differ from those that $o'_i$ assigns only in who gets what, and this preserves how many people are in each predicament at each state, so statewise invariance secures the conclusion.
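Here's a small Python sketch of the predicament bookkeeping (my own illustration, with made-up numbers): it checks that swapping who gets which utilities at a state preserves the multiset of predicaments there, which is all that statewise invariance cares about.

```python
from collections import Counter

# An individual's predicament at a state: the tuple of utilities the options
# would give them there. Sketch with made-up numbers, not Thomas's notation.
def predicaments(problem, state):
    """Multiset of predicaments at a state, one per individual (by position)."""
    options = sorted(problem)
    n = len(problem[options[0]][state])
    return Counter(tuple(problem[o][state][i] for o in options) for i in range(n))

# The same state with Ada's and Bab's utilities swapped: who gets what
# changes, but the predicament counts do not.
original = {"o1": {"s1": (4, 2, 7)}, "o2": {"s1": (3, 6, 2)}}
swapped  = {"o1": {"s1": (2, 4, 7)}, "o2": {"s1": (6, 3, 2)}}

assert predicaments(original, "s1") == predicaments(swapped, "s1")
```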

The final move is to compare the social choice problem we've just described with the individual choice problem behind the veil of ignorance that is extracted from the original social choice problem. In this problem, our chooser is not only ignorant of the state of the world, but also ignorant of who they are. We'll call our chooser Deb. So there are six states:

  • they're Ada in $s_1$, which we write $s^A_1$;
  • they're Bab in $s_1$, which we write $s^B_1$;
  • they're Cam in $s_1$, which we write $s^C_1$;
  • and so on.
And given a state of the world, it is equally likely our chooser is Ada, Bab, or Cam. So the choice problem is this:$$\begin{array}{r|cccccc} & s^A_1 & s^B_1 & s^C_1 & s^A_2 & s^B_2 & s^C_2 \\ \hline o^*_1 & u_{11} & v_{11} & w_{11} & u_{12} & v_{12} & w_{12} \\ o^*_2 & u_{21} & v_{21} & w_{21} & u_{22} & v_{22} & w_{22} \\ \hline P & p_1/3& p_1/3& p_1/3& p_2/3& p_2/3& p_2/3 \end{array}$$

According to Thomas's third and final invariance principle, option $o^*_i$ is permissible iff $o''_i$ is permissible. To state the general version: take two social choice problems, possibly with different populations and states, but with the same number of options, and suppose that, for any given predicament, the probability of being in a state at which that is your predicament is the same for any individual in one population and any individual in the other. Then the same options should be permissible. Thomas calls this personwise invariance.

Let's see why this holds between the veil-of-ignorance choice between $o^*_1$ and $o^*_2$ and the social choice between $o''_1$ and $o''_2$. From the first problem, take the only individual, namely our chooser Deb. From the second, take Ada, for instance. Now take a predicament that Deb might be in: that is, a utility that $o^*_1$ will give her and a utility that $o^*_2$ will give her; and suppose she'll be in it in state $s_1$ if she's Ada, and in state $s_2$ if she's Cam. Then the probability she's in that predicament is $p_1/3 + p_2/3$. But Ada faces that predicament in $s^{ABC}_1$ and $s^{ACB}_1$, since Deb faces it as Ada in state $s_1$, and in $s^{CAB}_2$ and $s^{CBA}_2$, since Deb faces it as Cam in state $s_2$. So the probability Ada is in that predicament is $p_1/6 + p_1/6 + p_2/6 + p_2/6$, which is the same. And so on for other individuals and other predicaments. And so personwise invariance secures the conclusion.
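The bookkeeping in this step is easy to get wrong, so here's a small Python sketch (my own illustration, with made-up utilities and probabilities) that computes Deb's distribution over predicaments behind the veil and each individual's distribution in the permuted problem, and checks that they coincide exactly:

```python
from collections import Counter
from fractions import Fraction
from itertools import permutations

# Made-up original problem: option -> state -> (Ada, Bab, Cam) utilities.
problem = {"o1": {"s1": (4, 2, 7), "s2": (5, 5, 1)},
           "o2": {"s1": (3, 6, 2), "s2": (8, 0, 4)}}
probs = {"s1": Fraction(3, 10), "s2": Fraction(7, 10)}
options = sorted(problem)
orders = list(permutations(range(3)))  # the six orderings of the three people

def veil_distribution():
    """Deb's distribution over predicaments: given a state, she is equally
    likely to be any of the three people."""
    dist = Counter()
    for s, p in probs.items():
        for i in range(3):
            pred = tuple(problem[o][s][i] for o in options)
            dist[pred] += p / 3
    return dist

def person_distribution(i):
    """Person i's distribution over predicaments in the permuted fine-grained
    problem: in a state carrying a given list, person i receives the baseline
    utility of whoever occupies position i in that list."""
    dist = Counter()
    for s, p in probs.items():
        for order in orders:
            j = order[i]  # occupant of person i's position in this list
            pred = tuple(problem[o][s][j] for o in options)
            dist[pred] += p / 6
    return dist

# Deb's distribution matches every individual's, exactly (no rounding).
assert all(person_distribution(i) == veil_distribution() for i in range(3))
```

Using exact fractions rather than floats makes the "same probability" claim a strict equality, as in the argument itself.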

Stringing these three steps together, we get the conclusion that $o_i$ is permissible iff $o'_i$ is permissible iff $o''_i$ is permissible iff $o^*_i$ is permissible. That is, what it is permissible to choose in the social choice case is precisely what it is permissible to choose in the corresponding individual choice case in which you are behind the veil of ignorance, completely uncertain of who you are.

Of course, the invariance principles will not be to everyone's taste. For instance, consider the following two social choice problems. In both the population comprises Ada and Bab. In the first, the pay-off matrix is this:
$$\begin{array}{r|cc} & s_1 & s_2 \\ \hline o_1 & (4, 4) & (4, 4) \\ o_2 & (6, 3) & (6, 3)\end{array}$$In the second, it is this:
$$\begin{array}{r|cc} & s_1 & s_2 \\ \hline o'_1 & (4, 4) & (4, 4) \\ o'_2 & (6, 3) & (3, 6)\end{array}$$I think it's plausible that we would prefer $o_1$ to $o_2$ but $o'_2$ to $o'_1$, and this goes against the requirement of statewise invariance, since, for each state and each predicament, the number of people who face that predicament in that state is the same in the two problems. We prefer $o_1$ to $o_2$ because, while $o_2$ gives greater total welfare, it gives Bab no chance of being the better-off person, and the increase in total welfare isn't sufficient to compensate for that unfairness. And we prefer $o'_2$ to $o'_1$ because it gives greater total welfare, and it gives each person a chance of being the better-off one. Of course, equality considerations might tell in favour of the first option in both choice problems, but we might assume that $3$ is a pretty high level of welfare, and that some modest inequality with that as the minimum level of welfare is acceptable.
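We can check mechanically that these two problems really do meet the precondition of statewise invariance, namely that at each state the same number of people face each predicament. A quick Python sketch (my own illustration):

```python
from collections import Counter

# The two problems from the text: (Ada, Bab) utilities at each state.
first  = {"o1": {"s1": (4, 4), "s2": (4, 4)}, "o2": {"s1": (6, 3), "s2": (6, 3)}}
second = {"o1": {"s1": (4, 4), "s2": (4, 4)}, "o2": {"s1": (6, 3), "s2": (3, 6)}}

def predicament_counts(problem, state):
    """How many people face each predicament (tuple of utilities across
    the available options) at the given state."""
    options = sorted(problem)
    n = len(problem[options[0]][state])
    return Counter(tuple(problem[o][state][i] for o in options) for i in range(n))

# At each state, the same number of people face each predicament, so
# statewise invariance demands the same verdict in both problems ...
for s in ("s1", "s2"):
    assert predicament_counts(first, s) == predicament_counts(second, s)
# ... even though only in the second problem does the second option give
# each person some chance of being the better off.
```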
