On contractualism, reasonable compromise, and the source of priority for the worst-off

Different policies introduced by a social planner, whether the government of a country or the head of an institution, lead to situations in which different people's lives go better or worse. That is, in the jargon of this area, they lead to different distributions of welfare across the individuals they affect. If we allow the unfettered accumulation of private wealth, that will lead to one distribution of welfare across the people in the country where the policy is adopted; if we cap such wealth, tax it progressively, or prohibit it altogether, those policies will lead to different distributions. The question I want to think about in this post is a central question of social choice theory: how should we choose between such policies? Again using the jargon of the area, I want to sketch a particular sort of social contract argument for a version of the prioritarian's answer to this question, and to show that this answer avoids an objection to the standard version of prioritarianism raised by Alex Voorhoeve and Mike Otsuka. But for those unfamiliar with this jargon, all will hopefully become clear.

Disclaimer: I've only newly arrived in ethics and social choice theory, so while I've tried to find versions of this argument and failed, it's quite possible, indeed quite likely, that it already exists. Part of my hope in writing this post is that someone points me towards it!

A cat with whom there is no reasonable compromise

1. Two approaches to social planning: axiological and contractual

There are two sorts of situation in which the social planner might find themselves: in the first, they are certain of the consequences of the policies they might adopt; in the second, they are not. Throughout the post, I'll assume there are just two people in the population that the social planner's choices will affect, Ada and Bab, and I'll write $(u, v)$ for the welfare distribution in which Ada has welfare level $u$ and Bab has $v$. The following table represents a situation in which the social planner knows the world is in state $s$, and they have two options, $o_1$ and $o_2$, where the first gives Ada $1$ unit of welfare and Bab $9$, while the second option gives Ada and Bab $4$ units each. $$\begin{array}{r|c} & s \\ \hline o_1 & (1, 9) \\ o_2 & (4,4)\end{array}$$And this table represents a situation in which the social planner is uncertain whether the world is in state $s_1$ or $s_2$, and the welfare distributions are as indicated: $$\begin{array}{r|cc} & s_1 & s_2 \\ \hline  o_1 & (2, 10) & (7,7) \\ o_2 & (4,4) & (1,20)\end{array}$$

There are (at least) two ways to approach the social planner's choice: axiological and contractual.

An axiologist provides a recipe that takes a distribution of welfare at a state of the world, such as $(1, 9)$, and aggregates it in some way to give a measure of the social welfare of that distribution, which we might think of as the group's welfare given that option at that state of the world. If there is no uncertainty, then the social planner ranks the options by their social welfare; if there is uncertainty, then the social planner uses standard decision theory to choose, using the social welfare levels as the utilities---so, for instance, they might choose by maximizing expected social welfare. 

Average and total utilitarians are axiologists; so are average and total prioritarians.

The average utilitarian takes the social welfare to be the average welfare, so that the social welfare of the distribution $(1, 9)$ is $\frac{1+9}{2} = 5$;

The total utilitarian takes it to be the total, i.e., $1+9=10$.

The average prioritarian takes each level of welfare in the distribution, transforms it by applying a concave function, and then takes the average of these transformed welfare levels. The idea is that, as with utilitarianism, increasing an individual's welfare while keeping everything else fixed should increase social welfare, but, unlike utilitarianism, increasing a worse-off person's welfare by a given amount should increase social welfare more than increasing a better-off person's welfare by the same amount; put another way, increasing an individual's welfare should have diminishing marginal moral value, just as we sometimes say that increasing an individual's monetary wealth has diminishing marginal welfare value. So, for instance, if they use the logarithmic function $\log(x)$ to transform the levels of welfare, the social welfare of $(1, 9)$ is $\frac{\log(1) + \log(9)}{2} \approx 1.099$. Notice that, if I increase Bab's 9 units of welfare to 10 and keep Ada's fixed at 1, the average prioritarian value goes from $\frac{\log(1) + \log(9)}{2} \approx 1.099$ to $\frac{\log(1) + \log(10)}{2} \approx 1.151$, whereas if I increase Ada's 1 unit to 2 and leave Bab's fixed at 9, it goes from $\frac{\log(1) + \log(9)}{2} \approx 1.099$ to $\frac{\log(2) + \log(9)}{2} \approx 1.445$.

The total prioritarian takes the social welfare to be the total of these transformed welfare levels. So, if our concave function is the logarithmic function, the social welfare of $(1,9)$ is $\log(1) + \log(9) \approx 2.197$.
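The four recipes above can be sketched in a few lines of code; the function names are my own, for illustration, and the concave transform defaults to the natural logarithm used in the examples:

```python
import math

def average_utilitarian(dist):
    # Social welfare = arithmetic mean of individual welfares.
    return sum(dist) / len(dist)

def total_utilitarian(dist):
    # Social welfare = sum of individual welfares.
    return sum(dist)

def average_prioritarian(dist, f=math.log):
    # Transform each welfare by a concave function, then average.
    return sum(f(u) for u in dist) / len(dist)

def total_prioritarian(dist, f=math.log):
    # Transform each welfare by a concave function, then sum.
    return sum(f(u) for u in dist)

dist = (1, 9)
print(average_utilitarian(dist))                  # 5.0
print(total_utilitarian(dist))                    # 10
print(round(average_prioritarian(dist), 3))       # 1.099
print(round(total_prioritarian(dist), 3))         # 2.197
```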

The second approach to the social planner's choice appeals to social contracts. For instance, Harsanyi, Rawls, and Buchak all think the social planner should choose as if they are a member of the society for whom they are choosing, and should do so with complete ignorance of whom, within that society, they are. These theorists differ only in what decision rule is appropriate behind such a veil of ignorance. Others think you should choose a policy that can be justified to each member of the society, where what that entails can be spelled out in a number of ways, such as minimizing the worst legitimate complaints members of the affected population might make against your decision, or minimizing the total legitimate complaints they might make, and where there are different ways to measure the legitimate complaints an individual might make.

2. The Voorhoeve-Otsuka Objection: social planning for individuals

One of the purposes of this blogpost is to bring the axiologists and contractualists together by showing that a certain version of contractualism leads to a certain axiological approach that resembles prioritarianism, but avoids an objection that has troubled that position. Let me spell out that objection, which was raised originally by Alex Voorhoeve and Mike Otsuka. In it, they ask us to imagine that the social planner is choosing for a population that contains just a single person, Cal. For the sake of concreteness, let's say they face the following choice: $$\begin{array}{r|cc} & 50\% & 50\% \\ & s_1 & s_2 \\ \hline  o_1 & (2) & (3) \\ o_2 & (1) & (5)\end{array}$$Then prioritarianism tells the social planner to maximize expected social welfare: for $o_1$, this is $\frac{1}{2}\times \log(2) + \frac{1}{2} \times \log(3) \approx 0.896$; for $o_2$, it is $\frac{1}{2}\times \log(1) + \frac{1}{2} \times \log(5) \approx 0.805$. But standard decision theory says that Cal themselves should maximize their expected welfare: for $o_1$, this is $\frac{1}{2} \times 2 + \frac{1}{2} \times 3 = 2.5$; for $o_2$, it is $\frac{1}{2} \times 1 + \frac{1}{2} \times 5 = 3$. So the social planner will choose $o_1$, while Cal will choose $o_2$. That is, according to the prioritarian, morality requires the social planner to choose against Cal's wishes. But, Voorhoeve and Otsuka contend, that can't be right.
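The disagreement can be checked directly; this is just the arithmetic above, with my own names for the pieces:

```python
import math

# Cal's welfare in each equiprobable state, under each option.
probs = (0.5, 0.5)
welfare = {"o1": (2, 3), "o2": (1, 5)}

def expected(values, probs):
    # Probability-weighted average.
    return sum(p * v for p, v in zip(probs, values))

# The prioritarian planner maximizes expected *transformed* welfare...
planner = {o: expected([math.log(w) for w in ws], probs)
           for o, ws in welfare.items()}
# ...while Cal maximizes expected welfare itself.
cal = {o: expected(ws, probs) for o, ws in welfare.items()}

print(max(planner, key=planner.get))  # o1 (0.896 vs 0.805)
print(max(cal, key=cal.get))          # o2 (3.0 vs 2.5)
```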

3. Justifying compromises to each

Finally, then, let me spell out my argument. It begins with a version of the social contract approach on which the social planner must be able to justify their choice to each person affected. When a policy affects different people differently, you can't reasonably expect that the social planner will make the choice by maximizing your own personal expected welfare. You must realise that you'll have to tolerate some degree of compromise with the welfare functions of others who are affected. So we seek a measure of social welfare that the social planner can use in her decision-making that effects a compromise between the welfare functions of the various individuals.

But of course compromises can be more or less reasonable; and some compromises an individual might reasonably reject. For instance, take the welfare distribution $(2, 7)$, and suppose I proffer a social welfare function that assigns this a social welfare of $10$. This is not reasonable! Why not? Well, a natural thing to say is that, while a compromise between two competing welfare functions will inevitably lie some distance from at least one of them, a compromise is unreasonable if it lies further than necessary from both, and this one does. After all, consider an alternative social welfare function that assigns a social welfare of $7$ instead of $10$. Then this lies closer to Ada's individual welfare, which is $2$, and to Bab's, which is $7$.

This suggests that one way to justify a compromise to each person affected is to show that the welfare function used to make a decision does not lie unreasonably far from the individual welfare functions. There are many ways to spell this out, but let me describe just two---if this project has any mileage, the major work will be in saying why one of these is the right way to go.

So what we need first is a measure of how far an individual's welfare lies from the social welfare. For the moment, I won't say what this is, but I'll present two alternatives below. Given such a measure, we might select our compromise social welfare to be the one that minimizes the total distance to the individual welfares. This seems like a compromise we could easily justify to all affected parties. "We took you all into account," we might say, "and each equally. As you'll understand," we might continue, "the resulting social welfare had to lie some distance from at least some of your individual welfares. But we've minimized the total distance summed over all of you."

So suppose that works; suppose that is sufficient to justify to each person in the population the social welfare function we'll use to choose between policies. Which measure of distance from a candidate social welfare to an individual welfare should we use when we sum up those distances and pick a candidate in a way that minimizes that sum? Again, the choice here requires some justification, but let me describe two that are popular in a range of contexts in which we must measure how far one number lies from another. There are a bunch of results that characterize each as the unique function with certain apparently desirable properties, but let's leave the question of picking between them aside and see what they say:

The first is known as squared Euclidean distance (SED):$$\mathrm{SED}(a, b) = |a-b|^2.$$ So the distance from $a$ to $b$ is just the square of the difference between them.

The second is known as generalized Kullback-Leibler divergence (GKL): $$\mathrm{GKL}(a, b) = a\log \left ( \frac{a}{b} \right ) - a + b.$$

Both SED and GKL are divergences: that is, the distance from one number to another is always non-negative; it is zero if both numbers are the same; it is positive otherwise.

So now suppose $(u, v)$ is the welfare distribution over Ada and Bab. For each of these two measures of distance, which is the social welfare that minimizes total distance to the individual welfares?

For SED, it is the arithmetic mean of $u$ and $v$, that is, $\frac{u+v}{2}$. That is, $\mathrm{SED}(x, u) + \mathrm{SED}(x, v)$ is minimized, as a function of $x$, at $x = \frac{u+v}{2}$. And, in general, $\mathrm{SED}(x, u_1) + \ldots + \mathrm{SED}(x, u_n)$ is minimized, as a function of $x$, at $x = \frac{u_1 + \ldots + u_n}{n}$.

For GKL, it is the geometric mean of $u$ and $v$, that is, $\sqrt{uv}$. That is, $\mathrm{GKL}(x, u) + \mathrm{GKL}(x, v)$ is minimized, as a function of $x$, at $x = \sqrt{uv}$. And, in general, $\mathrm{GKL}(x, u_1) + \ldots + \mathrm{GKL}(x, u_n)$ is minimized, as a function of $x$, at $x = \sqrt[n]{u_1 \times \ldots \times u_n}$.
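Both minimizers can be confirmed numerically. Here is a sketch using a brute-force grid search rather than calculus; the function names and search bounds are my own choices:

```python
import math

def sed(a, b):
    # Squared Euclidean distance.
    return (a - b) ** 2

def gkl(a, b):
    # Generalized Kullback-Leibler divergence (defined for a, b > 0).
    return a * math.log(a / b) - a + b

def minimize_total(distance, welfares, lo=0.01, hi=20.0, steps=20000):
    # Grid search for the x minimizing total distance to the welfares.
    return min((lo + (hi - lo) * i / steps for i in range(steps)),
               key=lambda x: sum(distance(x, u) for u in welfares))

print(round(minimize_total(sed, [1.0, 9.0]), 2))  # 5.0, the arithmetic mean
print(round(minimize_total(gkl, [1.0, 9.0]), 2))  # 3.0, the geometric mean
```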

So, if we use SED, we recover average utilitarianism, while if we use GKL, we introduce a new(-ish) way to form social welfare functions from individual ones. I'll call this new(-ish) way geometric compromise.

Let's see average utilitarianism and geometric compromise at work in the case of choice under certainty from the introduction: $$\begin{array}{r|c} & s \\ \hline o_1 & (1, 9) \\ o_2 & (4,4)\end{array}$$The average utilitarian takes the social welfare of $(1, 9)$ to be $5$, and the social welfare of $(4, 4)$ to be $4$, while the geometric compromiser takes the social welfare of $(1, 9)$ to be $\sqrt{1\times 9} = 3$, and the social welfare of $(4, 4)$ to be $4$. So, while the average utilitarian will choose $o_1$, the geometric compromiser will choose $o_2$, just as the prioritarian will.

In fact, this agreement between geometric compromiser and prioritarian is no coincidence. In situations in which the welfare distribution delivered by the different options is known, and when the prioritarian transforms individual welfares by taking their logarithm before summing them to give the social welfare, these will always agree. That's because the geometric mean of a sequence of numbers is a strictly increasing function of the average of the logarithms of those numbers: in symbols,$$\sqrt{u \times v} = e^{\frac{\log(u) + \log(v)}{2}},$$ and $e^x$ is a strictly increasing function of $x$. 

However, geometric compromise and prioritarianism can come apart when there is uncertainty about the outcome of a policy. That is because the strictly increasing function that transforms the average prioritarian's account of social welfare into the geometric compromise is not a linear one.

But a significant advantage of geometric compromise over prioritarianism is that, when there is only a single person affected by a policy, the social welfare function coincides with that individual's welfare function. After all, our contractualist approach says that you should take the social welfare of a distribution to be the value such that total distance from that value to the welfare levels of the individuals is minimized. When there is just one individual, the value that minimizes this is simply that individual's welfare level, since GKL is a divergence, as mentioned above. So, when deciding under uncertainty on behalf of just one person, the social planner will choose by maximizing expected social welfare, which is just maximizing expected individual welfare, and there will be no tension between what the social planner chooses on the individual's behalf and what the individual would have chosen on their own behalf. We thereby avoid the objection from Voorhoeve and Otsuka.
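To make this concrete, here is a small sketch (function names mine) showing that, for a single individual, the geometric compromise collapses to that individual's welfare, so the planner's choice in the Voorhoeve-Otsuka case matches Cal's:

```python
import math

def geometric_compromise(dist):
    # Social welfare as the geometric mean of the individual welfares.
    prod = 1.0
    for u in dist:
        prod *= u
    return prod ** (1.0 / len(dist))

# With one individual, the compromise is just that individual's welfare:
print(geometric_compromise((2,)))  # 2.0

# So maximizing expected social welfare is maximizing Cal's expected welfare.
probs = (0.5, 0.5)
options = {"o1": ((2,), (3,)), "o2": ((1,), (5,))}
planner = {o: sum(p * geometric_compromise(d) for p, d in zip(probs, dists))
           for o, dists in options.items()}
print(max(planner, key=planner.get))  # o2, just as Cal would choose
```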

4. Properties of Geometric Compromise

What properties does geometric compromise have, and how do they compare with the properties that utilitarianism and prioritarianism have?

Welfarism: like utilitarianism, prioritarianism, and egalitarianism, according to geometric compromise, when there is no uncertainty about the outcomes of different policies, the social planner's ranking of those policies depends only on the welfare distributions to which they give rise.

Anonymity: like utilitarianism, prioritarianism, and egalitarianism, according to geometric compromise, if one welfare distribution is obtained from another by changing only the identity of the individuals who receive the different levels of welfare, then both distributions have the same social welfare. That is because $\sqrt{u \times v} = \sqrt{v \times u}$.

Pigou-Dalton: like prioritarianism and egalitarianism, but unlike utilitarianism, according to geometric compromise, if one welfare distribution is obtained from another by transferring some amount of welfare from a better-off individual to a worse-off individual in a way that still leaves the recipient worse off than the donor, then the resulting distribution has higher social welfare. That is because, if $\varepsilon > 0$ and $u + \varepsilon < v - \varepsilon$, then $\sqrt{u \times v} < \sqrt{(u+\varepsilon) \times (v - \varepsilon)}$.

Person Separability: like prioritarianism and utilitarianism, but unlike egalitarianism, according to geometric compromise, the order of the social welfare of two distributions depends only on the welfare of the individuals who have different welfare in those two distributions. That is because, for $u > 0$, $\sqrt{u \times v} \leq \sqrt{u \times v'}$ iff $v \leq v'$.
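The last three properties can be spot-checked numerically; a minimal sketch, using the two-person geometric mean and example numbers of my own choosing:

```python
import math

def gc(dist):
    # Geometric compromise: the geometric mean of the welfares.
    return math.prod(dist) ** (1.0 / len(dist))

# Anonymity: permuting who gets which welfare level changes nothing.
print(gc((1, 9)) == gc((9, 1)))  # True

# Pigou-Dalton: moving 1 unit from Bab (at 9) to Ada (at 1) raises social welfare.
print(gc((2, 8)) > gc((1, 9)))   # True

# Person separability: whether raising Bab from 5 to 7 is an improvement
# doesn't depend on whether Ada sits at 2 or at 4.
print((gc((2, 5)) < gc((2, 7))) == (gc((4, 5)) < gc((4, 7))))  # True
```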

5. Generalizing geometric compromise

Prioritarianism says that, to obtain the social welfare of a distribution, you should take each individual's welfare, transform it by a concave function, and add up the transformations. As we've seen, if the concave function is the logarithmic function this orders distributions exactly as geometric compromise orders them; and geometric compromise is the compromise that minimizes total distance to the individual welfares when that distance is given by GKL. But suppose the concave function isn't the logarithmic function but something else---let's call it $f$. Is there a measure of distance such that minimizing total distance to the individual welfares using that gives a compromise social welfare that agrees with prioritarianism-with-$f$ on the ordering of distributions, and again avoids the Voorhoeve-Otsuka objection? Happy news: there is! What follows in this section gets a little technical, so do feel free to skip---it's just an explicit construction of the measure of distance we need.

There is a reasonably well-studied class of functions known as the Bregman divergences, which we can use to measure the distance from one number to another. They all have the following form: take a strictly convex, differentiable function $\varphi$ and define $d_\varphi$ as follows:$$d_\varphi(x, y) = \varphi(x) - \varphi(y) - \varphi'(y)(x-y).$$Now, let $F$ be an anti-derivative of $f$---that is, $F' = f$. Then, if $f$ is differentiable, $F$ is strictly convex and differentiable: since $f$ is strictly increasing, $f' > 0$, so $F'' = f' > 0$, so $F$ is strictly convex. Then take the measure of distance used to assess compromises to be $d_F$. Differentiating with respect to $x$, the first-order condition for minimizing $d_F(x, u_1) + \ldots + d_F(x, u_n)$ is $f(x) = \frac{f(u_1) + \ldots + f(u_n)}{n}$, so the sum is minimized, as a function of $x$, at $$f^{-1}\left (\frac{f(u_1) + \ldots + f(u_n)}{n} \right ).$$ And $f^{-1}$ is strictly increasing. So the compromise social welfare produced using $d_F$ ranks one distribution above another exactly when average-prioritarianism-with-$f$ does.
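Here is a numerical sketch of the construction, with $f(x) = \sqrt{x}$ standing in for the prioritarian's concave transform (my choice, purely for illustration), and with the minimizer found by grid search:

```python
import math

def f(x):
    # The prioritarian's concave, strictly increasing transform (here: sqrt).
    return math.sqrt(x)

def F(x):
    # An anti-derivative of f, so that F' = f.
    return (2.0 / 3.0) * x ** 1.5

def d_F(x, y):
    # The Bregman divergence generated by F: F(x) - F(y) - F'(y)(x - y).
    return F(x) - F(y) - f(y) * (x - y)

def minimize_total(welfares, lo=0.01, hi=20.0, steps=20000):
    # Grid search for the x minimizing total Bregman distance to the welfares.
    return min((lo + (hi - lo) * i / steps for i in range(steps)),
               key=lambda x: sum(d_F(x, u) for u in welfares))

u, v = 1.0, 9.0
print(round(minimize_total([u, v]), 2))   # 4.0: the compromise social welfare
print(((f(u) + f(v)) / 2) ** 2)           # 4.0: f inverse of the average of f(u), f(v)
print(round(minimize_total([7.0]), 2))    # 7.0: one individual, just their welfare
```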

What's more, if there is just one individual, the social welfare of the distribution $(u)$ is $f^{-1}(f(u)) = u$, which is just the individual's welfare. So, again, we avoid Voorhoeve and Otsuka's objection.

6. Problems

Let me round this off by mentioning two closely related issues with the proposal. First, geometric compromise isn't defined when the welfare levels are negative; second, the orderings to which it gives rise aren't invariant under positive linear transformation of the welfare values. Let's start with the second. According to many accounts of how we measure welfare numerically, if one set of numbers adequately represents welfare levels, then so does any positive linear transformation of it: that is, you can multiply the numbers by a positive constant and add or subtract any constant, and you'll end up with a different but equally adequate representation of the levels of welfare. This is sometimes put by saying that there is no privileged zero or unit for welfare. However, while the ordering placed on welfare distributions by geometric compromise is invariant under multiplication by a positive constant (since $\sqrt{ku \times kv} = k\sqrt{u\times v}$), it is not invariant under addition or subtraction of a constant. This means that, in order for our proposal to make sense, there must be a privileged zero for welfare. However, that doesn't seem so implausible. After all, we might just take the minimum welfare level and let that be zero. That would also solve the first problem, since then it would make no sense to ascribe a negative welfare level and we would no longer have to worry about defining geometric compromise for such levels. I suspect this is the right way to go. I can imagine some complaining that there is no minimum welfare level, but I rather doubt that is true.
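Both points can be seen in a small example (the numbers are the introduction's $(1,9)$ vs $(4,4)$ case, shifted and scaled by constants of my choosing):

```python
import math

def gc(u, v):
    # Two-person geometric compromise.
    return math.sqrt(u * v)

# Multiplying all welfares by a positive constant preserves the ordering:
print(gc(1, 9) < gc(4, 4))      # True
print(gc(10, 90) < gc(40, 40))  # True: same ordering after scaling by 10

# But adding a constant to all welfares can reverse it:
print(gc(1 + 20, 9 + 20) > gc(4 + 20, 4 + 20))  # True: the ordering flips
```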
