In the previous two posts in this series (here and here), I described two arguments for the conclusion that the members of a group should agree. One was an epistemic argument and one a pragmatic argument. Suppose you have a group of individuals. Given an individual, we call the set of propositions to which they assign a credence their agenda. The group's agenda is the union of its members' agendas; that is, it includes any proposition to which some member of the group assigns a credence. The precise conclusion of the two arguments I described is this: the group is irrational if there is no single probability function defined on the group's agenda that gives the credences of each member of the group when restricted to their agenda. Following Matt Kopec, I called this norm Consensus.
*Cats showing a frankly concerning degree of consensus*
Both arguments use the same piece of mathematics, but they interpret it differently. Both appeal to mathematical functions that measure how well our credences achieve the goals that we have when we set them. There are (at least) two such goals: we aim to have credences that will guide our actions well, and we aim to have credences that will represent the world accurately. In the pragmatic argument, the mathematical function measures how well our credences achieve the first goal. In particular, it measures the utility we can expect to gain by having the credences we have and choosing in line with them when faced with whatever decision problems life throws at us. In the epistemic argument, the mathematical function measures how well our credences achieve the second goal. In particular, it measures the accuracy of our credences. As we noted in the second post on this, work by Mark Schervish and Ben Levinstein shows that the functions that measure these goals have the same properties: they are both strictly proper scoring rules. The arguments then appeal to the following fact: given a strictly proper scoring rule, if the members of a group do not agree on the credences they assign in the way required by Consensus, then there are some alternative credences they might assign instead that are guaranteed to be better according to that scoring rule.
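To see the flavour of the dominance fact the arguments appeal to, here is a small numerical sketch of my own (not the arguments' formal statement), using the Brier score as the strictly proper scoring rule: when two members disagree about a shared proposition, moving both to their average credence strictly lowers the group's total penalty at every possible world.

```python
# Toy illustration: two group members assign credences to the same proposition p.
# If they disagree, replacing both credences with their average strictly lowers
# the group's total Brier penalty (a strictly proper scoring rule) at every
# world -- a tiny instance of the dominance fact behind the two arguments.

def brier(credence, truth):
    """Brier penalty for one credence: squared distance from the truth value (1 or 0)."""
    return (credence - truth) ** 2

def total_score(credences, truth):
    """Total (group) Brier penalty at a world where p has truth value `truth`."""
    return sum(brier(c, truth) for c in credences)

disagreeing = [0.3, 0.7]              # the members violate Consensus on p
avg = sum(disagreeing) / len(disagreeing)
agreeing = [avg, avg]                 # a Consensus-satisfying alternative

for truth in (1, 0):                  # the world where p is true, then false
    before = total_score(disagreeing, truth)
    after = total_score(agreeing, truth)
    print(f"p={truth}: disagree {before:.2f} vs agree {after:.2f}")
    assert after < before             # dominated at every world
```

The numbers 0.3 and 0.7 are arbitrary; the strict convexity of the Brier score guarantees the same dominance for any pair of disagreeing credences.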
I'd like to turn now to assessing these arguments. My first question is this: In the norm of Probabilism, rationality requires something of an individual, but in the norm of Consensus, rationality requires something of a group of individuals. We understand what it means to say that an individual is irrational, but what could it mean to say that a group is irrational?
Here, I follow Kenny Easwaran's suggestion that collective entities---in his case, cities; in my case, groups---can be said quite literally to be rational or irrational. For Easwaran, a city is rational "to the extent that the collective practices of its people enable diverse inhabitants to simultaneously live the kinds of lives they are each trying to live." As I interpret him, the idea is this: a city, no less than its individual inhabitants, has an end or goal or telos. For Easwaran, for instance, the end of a city is enabling its inhabitants to live as they wish to. And a city is irrational if it does not provide---in its physical and technological infrastructure, its byelaws and governing institutions---the best means to that end among those that are available. Now, we might disagree with Easwaran's account of a city's ends. But the template he provides by which we might understand group rationality is nonetheless helpful. Following his lead, we might say that a group, no less than its individual members, has an end. For instance, its end might be maximising the total utility of its members, or it might be maximising the total epistemic value of their credences. And it is then irrational if it does not provide the best means to that end among those available. So, for instance, as long as agreement between members is available, our pragmatic and epistemic arguments for Consensus seem to show that a group whose ends are as I just described does not provide the best means to its ends if it does not deliver such agreement.
Understanding group rationality as Easwaran does helps considerably. As well as making sense of the claim that the group itself can be assessed for rationality, it also helps us circumscribe the scope of the two arguments we've been exploring, and so the scope of the version of Consensus that they justify. After all, it's clear on this conception that these arguments will only justify Consensus for a group if
- that group has the end of maximising total expected pragmatic utility or total epistemic utility, i.e., maximising the quantities measured by the mathematical functions described above;
- there are means available to it to achieve Consensus.
So, for instance, a group of sworn enemies hellbent on thwarting each other's plans is unlikely to have as its end maximising total utility, while a group composed of randomly selected individuals from across the globe is unlikely to have as its end maximising total epistemic utility, and indeed a group so disparate might lack any ends at all.
And we can easily imagine situations in which there are no available means by which the group could achieve Consensus, perhaps because it would be impossible to set up reliable lines of communication.
This allows us to make sense of two of the conditions that Donald Gillies places on the groups to which he takes his sure loss argument to apply (this is the first version of the pragmatic argument for Consensus; the one I presented in the first post and then abandoned in favour of the second version in the second post). He says (i) the members of the group must have a shared purpose, and (ii) there must be good lines of communication between them. Let me take these in turn to understand their status more precisely.
It's natural to think that, if a group has a shared purpose, it will have as its end maximising the total utility of the members of the group. And indeed in some cases this is almost certainly true. Suppose, for instance, that every member of a group cares only about the amount of biodiversity in a particular ecosystem that is close to their hearts. Then they will have the same utility function, and it is natural to say that maximising that shared utility is the group's end. But of course maximising that shared utility is equivalent to maximising the group's total utility, since the total utility is simply the shared utility scaled up by the number of members of the group.
However, it is also possible for a group to have a shared purpose without its end being to maximise total utility. After all, a group can have a shared purpose without each member taking that purpose to be the one and only valuable end. Imagine a different group: each member cares primarily about the level of biodiversity in their preferred area, but each also cares deeply about the welfare of their family. In this case, you might take the group's end to be maximising biodiversity in the area in question, particularly if it was this shared interest that brought them together as a group in the first place. But maximising this good might require the group not to maximise total utility: perhaps some members of the group have family who are farmers and who will be adversely affected by whatever is the best means to the end of greater biodiversity.
What's more, it's possible for a group to have as its end maximising total utility without having any shared purpose at all. For instance, a certain sort of utilitarian might say that the group of all sentient beings has as its end the maximisation of the total utility of its members. But that group does not have any shared purpose.
So I think we can use the pragmatic and epistemic arguments to determine the groups to which the norm of Consensus applies, or at least the groups for which our pragmatic and epistemic arguments can justify its application. It is those groups that have as their end either maximising the total pragmatic utility of the group, or maximising their total epistemic utility, or maximising some weighted average of the two---after all, the weighted average of two strictly proper scoring rules, one measuring epistemic utility and one measuring pragmatic utility, is itself a strictly proper scoring rule. Of course, this requires an account of when a group has a particular end. This, like all questions about when collectives have certain attitudes, is delicate. I won't say anything more about it here.
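The claim that a weighted average of two strictly proper scoring rules is itself strictly proper can be checked numerically. Here is a quick sketch of my own, using the Brier score and the log score as stand-ins for the epistemic and pragmatic measures; strict propriety means that expected penalty, computed with probability p, is uniquely minimised by reporting p itself.

```python
# Numerical check (an illustration, not a proof): a weighted average of two
# strictly proper scoring rules -- here the Brier score and the log score --
# is itself strictly proper. We verify that expected penalty under p = 0.7
# is minimised, over a fine grid of reports, exactly at 0.7.
import math

def brier(c, truth):
    return (c - truth) ** 2

def log_score(c, truth):
    return -math.log(c if truth else 1 - c)

def mixed(c, truth, w=0.4):
    """Weighted average of the two penalties; the weight w is arbitrary."""
    return w * brier(c, truth) + (1 - w) * log_score(c, truth)

def expected_penalty(report, p, rule):
    """Expected penalty of reporting `report` when the probability of truth is p."""
    return p * rule(report, 1) + (1 - p) * rule(report, 0)

p = 0.7                                     # the probability doing the expecting
grid = [i / 1000 for i in range(1, 1000)]   # candidate reports in (0, 1)
best = min(grid, key=lambda c: expected_penalty(c, p, mixed))
print(best)
assert abs(best - p) < 1e-3                 # the minimiser is the truth, p itself
```

Since each component penalty's expectation is uniquely minimised at p, so is any (non-trivial) weighted average of them; the grid search just makes that visible for one choice of p and w.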
Let's turn next to Gillies' claim that Consensus applies only to groups between whose members there are reliable lines of communication. In fact, I think our versions of the arguments show that this condition lives a strange double life. On the one hand, if such lines of communication are necessary to achieve agreement across the group, then the norm of Consensus simply does not apply to a group when these lines of communication are impossible, perhaps because of geographical, social, or technological barriers. A group cannot be judged irrational for failing to achieve something it could not possibly achieve, however much closer it would get to its goal if it could achieve that.
On the other hand, if such lines of communication are available, and if they increase the chance of agreement among members of the group, then our two arguments for Consensus are equally arguments for establishing such lines of communication, providing that the cost of doing so is outweighed by the gain in pragmatic or epistemic utility that comes from achieving agreement.
But these arguments do something else as well. They lend nuance to Consensus. In some cases in which some lines of communication are available but others aren't, or are too costly, our arguments still provide norms. Take, for instance, a case in which some central planner is able to communicate a single set of prior credences that each member of the group should have, but after the members start receiving evidence, this central planner can no longer coordinate their credences. And suppose we know that the members will receive different evidence: they'll be situated in different places, and so they'll see different things, have access to different information sources, and so on. So we know that, if they update on the evidence they receive in the standard way, they'll end up having different credences from one another and therefore violating Consensus. You might think, from looking at Consensus, that the group would do better, both pragmatically and epistemically, if each of its members were to ignore whatever evidence were to come in and to stick with their prior regardless in order to be sure that they remain in agreement and satisfy Consensus both in their priors and their posteriors.
In fact, however, this isn't the case. Let's take an extremely simple example. The group has just two members, Ada and Baz. Each has opinions only about the outcomes of two independent tosses of a fair coin. So the possible worlds are HH, HT, TH, TT. Ada will learn the outcome of the first, and Baz will learn the outcome of the second. A central planner can communicate to them a prior they should adopt, but that central planner can't receive information from them, and so can't receive their evidence and pool it and communicate a shared posterior to them. How should Ada and Baz proceed? How should they pick their priors, and what strategies should each adopt for updating when the evidence comes in? The entity we're assessing for rationality is the quadruple that contains Ada's prior together with her plan for updating, and Baz's prior together with his plan for updating. Which of these are available? Well, nothing constrains Ada's priors and nothing constrains Baz's. But there are constraints on their updating rules. Ada's updating rule must give the same recommendation at any two worlds at which her evidence is the same---so, for instance, it must give the same recommendation at HH as at HT, since all she learns at both is that the first coin landed heads. And Baz's updating rule must give the same recommendation at any two worlds at which his evidence is the same---so, for instance, it must give the same recommendation at HH as at TH. Then consider the following norm:
**Prior Consensus** Ada and Baz should have the same prior and both should plan to update on their private evidence by conditioning on it.
And the argument for this is that, if they don't, there's a quadruple of priors and plans that (i) satisfies the constraint outlined above and (ii) has greater total epistemic utility at each possible world; and there's a quadruple of priors and plans that (i) satisfies the constraint outlined above and (ii) has greater total expected pragmatic utility at each possible world. This is a corollary of an argument that Ray Briggs and I gave, and that Michael Nielsen corrected and improved on. So, if Ada and Baz are in agreement on their prior, and plan to stick with it rather than update on their evidence because that way they'll retain agreement, then they'll be accuracy dominated and pragmatically dominated.
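In this simple case, the epistemic side of the claim can be checked by direct computation. Here is a sketch of my own (not the Briggs-Pettigrew-Nielsen argument itself, just its upshot for this example): with a uniform prior over the four worlds, the pair of plans that conditions on private evidence has strictly lower total Brier penalty than the pair of plans that ignores it, at every possible world.

```python
# Direct computation for the Ada-and-Baz example: with a uniform prior over
# HH, HT, TH, TT, planning to condition on one's private evidence beats
# planning to ignore it -- the conditioning plans have strictly lower total
# Brier penalty at every possible world.
from itertools import product

worlds = [h + t for h, t in product("HT", "HT")]   # ['HH', 'HT', 'TH', 'TT']
prior = {w: 0.25 for w in worlds}

def condition(cr, keep):
    """Conditionalise the credence function cr on the set of worlds `keep`."""
    z = sum(cr[w] for w in keep)
    return {w: (cr[w] / z if w in keep else 0.0) for w in cr}

def brier(cr, actual):
    """Brier penalty of a credence function over worlds, at the actual world."""
    return sum((cr[w] - (1 if w == actual else 0)) ** 2 for w in cr)

for actual in worlds:
    # Ada learns the first toss, Baz the second.
    ada = condition(prior, [w for w in worlds if w[0] == actual[0]])
    baz = condition(prior, [w for w in worlds if w[1] == actual[1]])
    stick = 2 * brier(prior, actual)               # both ignore their evidence
    update = brier(ada, actual) + brier(baz, actual)
    print(f"{actual}: stick {stick:.2f} vs condition {update:.2f}")
    assert update < stick                          # conditioning wins at every world
```

By the symmetry of the setup the margin is the same at all four worlds; the point is just that the agreement preserved by ignoring evidence does not buy the group more accuracy than it loses.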
You might wonder how this is possible. After all, whatever evidence Ada and Baz each receive, Prior Consensus requires them to update on it in a way that leads them to disagree, and we know that they are then accuracy and pragmatically dominated. This is true, and it would tell against the priors + updating plans recommended by Prior Consensus if there were some way for Ada and Baz to communicate after their evidence came in. It's true that, for each possible world, there is some credence function such that if, at each world, Ada and Baz were to have that credence function rather than the ones they obtain by updating their shared prior on their private evidence, then they'd end up with greater total accuracy and pragmatic utility. But, without the lines of communication, they can't have that.
So, by looking in some detail at the arguments for Consensus, we come to understand better the groups to which it applies and the norms that apply to those groups to which it doesn't apply in its full force.