### Should we agree? II: a new pragmatic argument for consensus

In the previous post, I introduced the norm of Consensus. This is a claim about the rationality of groups. Suppose you've got a group of individuals. For each individual, call the set of propositions to which they assign a credence their agenda. They might all have quite different agendas; some might overlap, others might not. We might say that the credal states of the individual members cohere with one another if there is some probability function that is defined for every proposition that appears in any member's agenda, and the credences each member assigns to the propositions in their agenda match those this probability function assigns to them. Then Consensus says that a group is irrational if it does not cohere.

*A group coming to consensus*
In that post, I noted that there are two sorts of argument for this norm: a pragmatic argument and an epistemic argument. The pragmatic argument is a sure loss argument. It is based on the fact that, if the individuals in the group don't agree, there is a series of bets that their credences require them to accept that will, when taken together, lose the group money for sure. In this post, I want to argue that there is a problem with the sure loss argument for Consensus. It isn't peculiar to this argument, and indeed applies equally to any argument that tries to establish a rational requirement by showing that someone who violates it is exploitable. Indeed, I've raised it elsewhere against the sure loss argument for Probabilism (Section 6.2, Pettigrew 2020) and the money pump argument against non-exponential discounting and changing preferences in general (Section 13.7.4, Pettigrew 2019). I'll describe the argument here, and then offer a solution based on work by Mark Schervish (1989) and Ben Levinstein (2017). I've described this sort of solution before (Section 6.3, Pettigrew 2020), and Jason Konek (ta) has recently put it to interesting work addressing an issue with Julia Staffel's (2020) account of degrees of incoherence.

Sure loss and money pump arguments judge the rationality of attitudes, whether credences or preferences, by looking at the quality of the choices they require us to make. As Bishop Butler said, probability is the very guide of life. These arguments evaluate credences by exactly how well they provide that guide. So they are teleological arguments: they attempt to derive facts about the epistemic right---namely, what is rationally permissible---from facts about the epistemic good---namely, leading to pragmatically good choices.

Say that one sequence of choices dominates another if, taken together, the first leads to better outcomes for sure. Say that a collection of attitudes is exploitable if there is a sequence of decision problems you might face such that, if faced with them, these attitudes will require you to make a dominated sequence of choices.

For instance, take the sure loss argument for Probabilism: if you violate Probabilism because you believe $A\ \&\ B$ more strongly than you believe $A$, your credence in the former will require you to pay some amount of money for a bet that pays out a pound if $A\ \&\ B$ is true and nothing if it's false, and your credence in the latter will require you to sell for less money a bet that pays out a pound if $A$ is true and nothing if it's false; yet you'd be better off for sure rejecting both bets. So rejecting both bets dominates accepting both; your credences require you to accept both; so your credences are exploitable. Or take the money pump argument against cyclical preferences: if you prefer $A$ to $B$, $B$ to $C$, and $C$ to $A$, then you'll choose $B$ when offered a choice between $B$ and $C$, you'll then pay some amount to swap to $A$, and you'll then pay some further amount to swap to $C$; yet you'd be better off for sure simply choosing $C$ in the first place and not swapping either time that possibility was offered. So choosing $C$ and sticking with it dominates the sequence of choices your preferences require; so your preferences are exploitable.
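To see the arithmetic of the sure loss, here is a small sketch with illustrative numbers of my own choosing: credence 0.7 in $A\ \&\ B$ but only 0.6 in $A$, with bet prices picked between those two values.

```python
# Illustrative numbers, not from the post: credence 0.7 in A&B but only 0.6 in A
# violates Probabilism, since A&B entails A.
stake = 1.0        # each bet pays £1 if won, £0 if lost
price_buy = 0.68   # you'll pay this for a £1 bet on A&B, since 0.68 < 0.7 * stake
price_sell = 0.62  # you'll sell a £1 bet on A at this price, since 0.62 > 0.6 * stake

nets = []
for A, B in [(True, True), (True, False), (False, True), (False, False)]:
    bought = (stake if (A and B) else 0.0) - price_buy   # the bet you bought on A&B
    sold = price_sell - (stake if A else 0.0)            # the bet you sold on A
    nets.append(bought + sold)

print([round(n, 2) for n in nets])  # negative in every world: a sure loss
```

Any buy price strictly below 0.7 and sell price strictly above 0.6 will do, so long as the buy price exceeds the sell price; the gap between the two credences is what the bookie pockets.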

But, I contend, the existence of a sequence of decision problems in response to which your attitudes require you to make a dominated series of choices does not on its own render those attitudes irrational. After all, it is just one possible sequence of decision problems you might face, and there are many others you might face instead. The argument does not consider how your attitudes will require you to choose when faced with those alternative sequences, and yet surely that is relevant to assessing those attitudes. However bad the dominated sequence of choices is that the attitudes require when you face the sequence of decision problems described in the exploitability argument, there might be another sequence of decision problems in response to which those same attitudes require a series of choices that are very good; indeed, so good that they outweigh the badness of the dominated sequence. So, instead of judging your attitudes by looking only at the outcome of choosing in line with them when faced with a single sequence of decision problems, we should judge them by looking at the outcome of choosing in line with them when faced with any decision problem that might come your way, weighting each by how likely you are to face it, to give a balanced view of the pragmatic benefits of having those credences. That's the approach I'll present now, and I'll show that it leads to a new and better pragmatic argument for Probabilism and Consensus.

As I presented them, the sure loss arguments for Probabilism and Consensus both begin with a principle that I called Ramsey's Thesis. This is a claim about the prices that an individual's credence in a proposition requires her to pay for a bet on that proposition. It says that, if $p$ is your credence in $A$ and $x < pS$, then you are required to pay $£x$ for a bet that pays out $£S$ if $A$ is true and $£0$ if $A$ is false. Now in fact this is a particular consequence of a more general norm about how our credences require us to choose. Let's call the more general norm Extended Ramsey's Thesis. It says how our credence in a proposition requires us to choose when faced with a series of options, all of whose payoffs depend only on the truth or falsity of that proposition. Given a proposition $A$, let's say that an option is an $A$-option if its payoffs at any two worlds at which $A$ is true are the same, and its payoffs at any two worlds at which $A$ is false are the same. Then, given a credence $p$ in $A$ and an $A$-option $a$, we say that the expected payoff of $a$ by the lights of $p$ is
$$p \times \text{payoff of } a \text{ when } A \text{ is true} + (1-p) \times \text{payoff of } a \text{ when } A \text{ is false}$$

Now suppose you face a decision problem in which all of the available options are $A$-options. Then Extended Ramsey's Thesis says that you are required to pick an option whose expected payoff by the lights of your credence in $A$ is maximal.*
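A minimal sketch of Extended Ramsey's Thesis in code, with made-up options. An $A$-option is represented as a pair: its payoff when $A$ is true and its payoff when $A$ is false.

```python
# An A-option is a pair (payoff if A is true, payoff if A is false).
def expected_payoff(p, option):
    true_payoff, false_payoff = option
    return p * true_payoff + (1 - p) * false_payoff

def required_choice(p, options):
    # Extended Ramsey's Thesis: pick an option whose expected payoff
    # by the lights of credence p is maximal.
    return max(options, key=lambda o: expected_payoff(p, o))

options = [(1.0, -1.0),   # e.g. a bet on A
           (0.0, 0.0),    # decline to bet
           (-1.0, 1.0)]   # a bet against A

print(required_choice(0.8, options))  # high credence in A: bet on A
print(required_choice(0.2, options))  # low credence in A: bet against A
```

With credence 0.8, the bet on $A$ has expected payoff $0.8 - 0.2 = 0.6$, which beats declining (0) and betting against ($-0.6$), so it is the required choice.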

Next, we make a move that is reminiscent of the central move in I. J. Good's argument for Carnap's Principle of Total Evidence (Good 1967). We say what we take the payoff to be of having a particular credence in a particular proposition given a particular way the world is and when faced with a particular decision problem. Specifically, we define the payoff of having credence $p$ in the proposition $A$ when that proposition is true, and when you're faced with a decision problem $D$ in which all of the options are $A$-options, to be the payoff when $A$ is true of whichever $A$-option available in $D$ maximises expected payoff by the lights of $p$. And we define the payoff of having credence $p$ in the proposition $A$ when that proposition is false, and when you're faced with a decision problem $D$ in which all of the options are $A$-options, to be the payoff when $A$ is false of whichever $A$-option available in $D$ maximises expected payoff by the lights of $p$. So the payoff of having a credence is the payoff of the option you're required to pick using that credence.

Finally, we make the move that is central to Schervish's and Levinstein's work. We now know the payoff of having a particular credence in proposition $A$ when you face a decision problem in which all options are $A$-options. But of course we don't know which such decision problems we'll face. So, when we evaluate the payoff of having a credence in $A$ when $A$ is true, for instance, we look at all the decision problems populated by $A$-options that we might face, weight them by how likely we are to face them, and take the payoff of having that credence when $A$ is true to be the expected payoff of the $A$-options it would lead us to choose from those decision problems. And then we note, as Schervish and Levinstein themselves note: if we make certain natural assumptions about how likely we are to face different decisions, then this resulting measure of the pragmatic payoff of having credence $p$ in proposition $A$ is a continuous and strictly proper scoring rule. That is, mathematically, the functions we use to measure the pragmatic value of a credence are identical to the functions we use to measure the epistemic value of a credence in the epistemic utility argument for Probabilism and Consensus.**
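Here is a numerical illustration of that result, under one simple assumption I'm choosing purely for concreteness: every decision problem you might face is "accept or decline a £1 bet on $A$ at price $m$", with the price $m$ uniformly distributed on $[0,1]$. (Other assumptions about which decisions you'll face yield other strictly proper scores.) The sketch checks strict propriety: if the true chance of $A$ is $q$, then the credence with the highest expected payoff is $p = q$ itself.

```python
# Assumption for illustration: the only decision problems are "accept or decline
# a £1 bet on A at price m", with m uniform on [0, 1].
def avg_payoff(p, a_true, n=20_000):
    # Average payoff of holding credence p in A over the bets you might face,
    # choosing by Extended Ramsey's Thesis: accept exactly when p > m.
    total = 0.0
    for i in range(n):
        m = (i + 0.5) / n                    # midpoint rule over possible prices
        if p > m:
            total += (1 - m) if a_true else -m
    return total / n

# Propriety check with an arbitrary true chance q of A:
q = 0.3
def expected_score(p):
    return q * avg_payoff(p, True) + (1 - q) * avg_payoff(p, False)

best = max((i / 20 for i in range(21)), key=expected_score)
print(best)  # the grid point with the highest expected payoff is p = q = 0.3
```

Under this uniform assumption the expected score works out analytically to $qp - p^2/2$, which is maximised exactly at $p = q$; up to scaling, the resulting score is the quadratic (Brier) score.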

With this construction in place, we can piggyback on the theorems stated in the previous post to give new pragmatic arguments for Probabilism and Consensus. First: Suppose your credences do not obey Probabilism. Then there are alternative credences you might have instead that do obey that norm and, at any world, if we look at each decision problem you might face, ask what payoff you'd receive at that world were you to choose from the options in that decision problem as the two different sets of credences require, and then weight those payoffs by how likely you are to face that decision problem to give their expected payoff, then the alternatives will always have the greater expected payoff. This gives a strong reason to obey Probabilism.
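A quick check of this dominance claim, using the Brier score as the illustrative strictly proper measure of payoff (equivalently, its penalty, to be minimised) and numbers of my own choosing:

```python
# Credences 0.7 in A and 0.5 in not-A violate Probabilism (they sum to 1.2).
# The probabilistic credences 0.6 and 0.4 do better at BOTH worlds, as measured
# by the Brier penalty (squared distance from the truth values; lower is better).
def brier_penalty(cr_A, cr_notA, a_true):
    truth_A, truth_notA = (1.0, 0.0) if a_true else (0.0, 1.0)
    return (cr_A - truth_A) ** 2 + (cr_notA - truth_notA) ** 2

incoherent = (0.7, 0.5)
coherent = (0.6, 0.4)   # the nearest probability function in Euclidean distance

for a_true in (True, False):
    assert brier_penalty(*coherent, a_true) < brier_penalty(*incoherent, a_true)
print("dominated at every world")
```

At the world where $A$ is true the penalties are 0.32 versus 0.34; where $A$ is false, 0.72 versus 0.74. The coherent credences win at both.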

Second: Take a group of individuals. Now suppose the group's credences do not obey Consensus. Then there are alternative credences each member might have instead such that, if they were to have them, the group would obey Consensus and, at any world, if we look at each decision problem each member might face and ask what payoff that individual would receive at that world were they to choose from the options in that decision problem as the two different sets of credences require, and then weight those payoffs by how likely they are to face that decision to give their expected payoff, then the alternatives will always have the greater expected total payoff when this is summed across the whole group.
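The same style of check works for Consensus, again with hypothetical numbers: two group members assign credences 0.8 and 0.4 to the same proposition $A$, so the group does not cohere. Moving both to a common credence of 0.6 lowers the group's total Brier penalty at both worlds.

```python
# Brier penalty for a single credence in A (lower is better).
def penalty(cr, a_true):
    truth = 1.0 if a_true else 0.0
    return (cr - truth) ** 2

group = [0.8, 0.4]        # members disagree about A: the group violates Consensus
consensus = [0.6, 0.6]    # both adopt the midpoint instead

for a_true in (True, False):
    total_before = sum(penalty(c, a_true) for c in group)
    total_after = sum(penalty(c, a_true) for c in consensus)
    assert total_after < total_before
print("consensus dominates in total score")
```

When $A$ is true the totals are 0.40 before versus 0.32 after; when $A$ is false, 0.80 versus 0.72. Summed across the group, the consensus credences do better at every world.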

So that is our new and better pragmatic argument for Consensus. The sure loss argument points out a single downside to a group that violates the norm. Such a group is vulnerable to exploitation. But it remains silent on whether there are upsides that might balance out that downside. The present argument addresses that problem. It finds that, if a group violates the norm, there are alternative credences they might have that are guaranteed to serve them better in expectation as a basis for decision making.

* Notice that, if $x < pS$, then the expected payoff of a bet that pays $S$ if $A$ is true and $0$ if $A$ is false is
$$p(-x + S) + (1-p)(-x) = pS- x$$
which is positive. So, if the two options are accept or reject the bet, accepting maximises expected payoff by the lights of $p$, and so it is required, as Ramsey's Thesis says.

** Konek (ta) gives a clear formal treatment of this solution. For those who want the technical details, I'd recommend the Appendix of that paper. I think he presents it better than I did in (Pettigrew 2020).

## References

Good, I. J. (1967). On the Principle of Total Evidence. The British Journal for the Philosophy of Science, 17, 319–322.

Konek, J. (ta). Degrees of Incoherence, Dutch Bookability & Guidance Value. Philosophical Studies.

Levinstein, B. A. (2017). A Pragmatist’s Guide to Epistemic Utility. Philosophy of Science, 84(4), 613–638.

Pettigrew, R. (2019). Choosing for Changing Selves. Oxford, UK: Oxford University Press.

Pettigrew, R. (2020). Dutch Book Arguments. Elements in Decision Theory and Philosophy. Cambridge, UK: Cambridge University Press.

Schervish, M. J. (1989). A general method for comparing probability assessors. The Annals of Statistics, 17, 1856–1879.

Staffel, J. (2020). Unsettled Thoughts. Oxford, UK: Oxford University Press.