### L. A. Paul on transformative experience and decision theory II

In the first part of this post, I considered the challenge to decision theory from what L. A. Paul calls *epistemically transformative experiences*. In this post, I'd like to turn to another challenge to standard decision theory that Paul considers. This is the challenge from what she calls *personally transformative experiences*. Unlike an epistemically transformative experience, a personally transformative experience need not teach you anything new, but it does change you in another way that is relevant to decision theory---it leads you to change your utility function. To see why this is a problem for standard decision theory, consider my presentation of naive, non-causal, non-evidential decision theory in the previous post.

In this simple decision theory, we model a decision problem as follows:

- $\mathcal{S}$ is a set of propositions each of which describes a different possible state of the world;
- $\mathcal{A}$ is a set of propositions each of which describes a different possible action that our agent might perform and states that in fact she does perform that action;
- $U$ is a function that takes a conjunction of the form $AS$, where
$A$ is in $\mathcal{A}$ and $S$ is in $\mathcal{S}$, and returns the
utility that the agent would obtain were that conjunction to hold---that
is, the utility she would obtain if she were to perform that action in
that world (we call such conjunctions the *outcomes*);
- $p$ is a subjective probability function (or probabilistic credence function) over the states of the world $S \in \mathcal{S}$.
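As a toy illustration of this model, here is a minimal sketch in code. The states, acts, and numbers are all hypothetical; the value of an act $A$ is computed directly as $\sum_{S} p(S)U(AS)$:

```python
# A toy decision problem for the simple theory above.
# All states, acts, and numbers are hypothetical illustrations.

states = ["rain", "shine"]
acts = ["umbrella", "no_umbrella"]

p = {"rain": 0.3, "shine": 0.7}  # credence over states

U = {  # utility of each outcome, i.e. each conjunction AS
    ("umbrella", "rain"): 5,
    ("umbrella", "shine"): 4,
    ("no_umbrella", "rain"): 0,
    ("no_umbrella", "shine"): 10,
}

def value(A):
    """V(A) = sum over states S of p(S) * U(AS)."""
    return sum(p[S] * U[(A, S)] for S in states)

best = max(acts, key=value)  # the act of maximal value
print({A: value(A) for A in acts}, best)
```

Here the act of maximal value is the one an agent guided by the theory would choose.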

With these in hand, we define the *value of an act $A$ relative to $p$ and $U$* as follows:

$$V(A) := \sum_{S \in \mathcal{S}} p(S)U(AS)$$

Finally, we can state the main decision rule of simple decision theory:

**Maximize value** If there is an action with maximal value and $A$ does not have maximal value, then choosing $A$ is irrational.

Stated this way, the decision rule gives a requirement of rationality. But it can also be stated as advice-giving:

*Don't choose an action if it has less value than an action of maximal value*. It is this advice-giving principle that is the primary target of Paul's challenges.

## Decision theory as the standard of justification of choices

Indeed, it might be worth dwelling a little on what exactly is the target of Paul's two challenges to standard decision theory. For many decision theorists, simple decision theory has two components:

- The first is a set of axioms that govern an agent's preference ordering on the set of acts.
- The second is a representation theorem. This shows that, if the set of acts has sufficiently rich structure with respect to the set of states, a preference ordering on the set of acts satisfies the axioms if, and only if, it coincides with the ordering of the acts by their value relative to a unique probabilistic credence function and a utility function unique up to positive affine transformation.

On this conception, the substantive content of decision theory is exhausted by the consistency axioms: it says nothing about how an agent should determine, or justify, the credences, utilities, and preferences she has. However, these justificatory or reason-giving tasks are clearly important. Whether or not we do it under the banner of decision theory, the problem of saying how we ought to go about them is a problem with which we should grapple.

What's more, it seems natural to say that the value function $V$, which combines a credence function and a utility function but is used in traditional decision theory merely to represent an agent's preferences, suggests a way in which an agent might determine or justify or give reasons for the preferences she has. If I ask you to justify your choice of act $A$ over act $B$, you might say that you chose $A$ instead of $B$ because you preferred $A$ to $B$. But this clearly gives rise to a further request for justification: please justify your preference for $A$ over $B$. And, to answer that, it might seem natural to say the following: you value a particular outcome to such-and-such a degree, and you value another outcome to so-and-so a degree, and so on; you believe of one possible state of the world that it is actual to such-and-such a degree, and of another that it is actual to so-and-so a degree, and so on; you determine the value of $A$ and of $B$ by combining these credences (degrees of belief) and utilities (degrees of value); and you use those values to determine your preferences.

Thus, as well as providing a mathematical representation of consistency conditions on preferences, you might naturally think that the value function $V$ plays an important role in the justifications we give of our preferences and thus of our choice behaviour. If you do, then it is to you that L. A. Paul addresses her challenges: How do you determine the value of $V$ for an act that gives rise to an epistemically transformative experience? And how do you determine the value of $V$ for an act that gives rise to a personally transformative experience? If you can't do it in either case, then our best account of how we justify choice behaviour or give reasons for our choices fails in that case.

It is worth noting that, while Paul's challenges are serious challenges to this conception of decision theory, they do not challenge the traditional conception of decision theory: that is, they do not provide counterexamples to the consistency axioms on preference orderings that traditional decision theorists take to be the substantive content of decision theory. (Thanks to Greg Wheeler for pushing for clarity on this point in discussion of the previous post.)

## Decision-making in the presence of utility change

Now, suppose you must choose between a range of possible actions; but you know that the utilities that you assign to the outcomes of those actions---the conjunctions $AS$ for action $A$ and state of the world $S$---will soon change. For instance, you assign higher utility to each of the outcomes of one action now than to the corresponding outcomes of another action; but you know that, in two hours, you will assign different utilities and those orderings will be reversed. Which option should you choose? Which utility function should you use to compute the value of each of the acts: your current utility function or the utility function you know you will have in two hours? As we will see below, there are cases in which it seems that you should use one and other cases in which it seems you should use the other; and there are further cases in which it seems you should consult both.

That is the general form of the problem. If you know your utilities will change, standard decision theory gives no advice about which utilities to use when you decide between actions now. How does this relate to personally transformative experiences? The general case considered above does not refer to experiences that change your utilities; it merely asks what to do if you know your utilities will change---and, as we will see below, these changes need not come about as a result of personally transformative experiences. Essentially, personally transformative experiences give rise to a particular version of the general problem, one that affects causal decision theory: How should you choose if you know that some actions will give rise to personally transformative experiences that will change your utilities? In particular, can it ever be rational to choose to change your utilities?

I'll begin by considering the problem of utility change as it arises for simple decision theory and I'll propose a solution. Then I'll say how that solution might be adapted to the case of causal decision theory.

## Three examples of decision-making in the presence of utility change

The following three examples will help to motivate the account I will offer:

**Disney** Disney's water parks are amongst your favourite holiday destinations. You assign extremely high utility to spending your vacation at one of them. Disney has just built a new water park that is said to be better than all the others. Unfortunately, the waiting list for tickets is 20 years long and costs £100 to join. You know that, in 20 years, you will have changed your mind about Disney water parks---by that time, you will assign very low utility to spending your vacation at one of them. Should you pay the £100 and join the waiting list?

**Friendship** You have just embarked on a new friendship. You know that, over time, this will become an extremely important friendship to you: your new friend sees the world as you do, and yet she challenges you in ways that you value. Thus, while you already value her friendship and assign to it high utility, you know that, in 20 years, if you remain living in the same city, you will come to assign it extremely high utility. You also know that she will stay in the city and will not move. You are now offered a choice: the person who currently occupies your dream job will retire in 20 years and she wants to anoint her successor now. The job is in another city in a country on the other side of the world. If you accept, you will move in 20 years to this other city to take up the job; in that case, your friendship---which, by that time, you will have come to value greatly---will wither. So currently, you value the 20 years of friendship plus the rest of your life in your dream job more than a lifetime of the friendship; but you know that, in 20 years, your utilities will be reversed. Should you accept the job?

**Immigration** Depressing though it is, you anticipate that your political opinions---particularly on the topic of immigration---will shift from liberal to illiberal in the next 20 years. You will tread what the British playwright Alan Bennett describes as "that dreary safari from left to right". You can choose now the immigration policy that will be implemented in your country in 20 years. What should you choose?

What are our reactions to these examples? Here are mine:

- In **Disney**, you shouldn't join the waiting list. However much you would currently value a trip to this amazing new park, you will not value it at the time you would in fact attend. That is, you should choose using your future utilities alone.
- In **Immigration**, you should choose a liberal future immigration policy. However strongly you believe that you will shift from left to right, you currently not only don't share those future utilities, you in fact reject them completely. Unlike in **Disney**, this rejection involves counting those future utilities as incorrect. You don't think there are correct or incorrect utilities concerning Disney water parks---either your current high utility or your future low utility is permissible. That's not the case in **Immigration**. You take your current utilities to be mandatory and your future ones to be impermissible. Thus, you should choose in line with your current utilities.
- In **Friendship**, I think the question is murkier. It seems that both utility functions are important: your current utility function, which ranks 20 years of friendship and the dream job higher than a lifetime of the friendship, and your future utility function, which reverses those rankings. And it seems that the length of time for which you will have these two utility functions matters a lot: the longer you will have the second utility function---the one that values the friendship so highly---the more I am inclined to say that you should decline the job and commit to the friendship for life.

My proposal is based on the claim that there are in fact three different ways in which, at a given time, we assign value to an entire outcome---I will call these *sub-utility assignments*, and the three functions that record these assignments I will call *sub-utility functions*. Each sub-utility function records the values we attach to a different sort of component of that outcome; and we determine our total utility for an outcome at a time by combining in a particular way the different sub-utilities we have for that outcome at the time in question, but also at future times. Thus, at any given time $t$, we have three sub-utility functions: our *objective sub-utility function* at $t$, which I will denote $OU_t$; our *momentary subjective sub-utility function* at $t$, which I will denote $MSU_t$; and our *enduring subjective sub-utility function* at $t$, which I will denote $ESU_t$. Each assigns a real number to an outcome.

To illustrate the idea, consider an outcome in which you attend two charity Marmite-eating events with a friend in support of a worthy cause: one occurs now; the other in 20 years. Then your three *current* sub-utility functions assign utility to this outcome as follows:

- The component of that outcome that consists of the occurrence of two charity events in support of a worthy cause is assigned value only by your *objective* sub-utility function---it is assigned what you take to be the objective utility of those two morally relevant events occurring.
- The component of the outcome that consists of sharing two evenings with a friend will be assigned value only by your *enduring* subjective sub-utility function---each evening will contribute equally to this enduring subjective sub-utility since, even though the second event will occur much later in your life, you currently assign positive subjective utility to living a life that includes that second event, independently of whether it is currently occurring.
- The component of the outcome that consists of the two aesthetic experiences of eating Marmite will be assigned value only by your *momentary* subjective sub-utility function---the first experience will be assigned exactly the subjective utility you currently assign to such an aesthetic experience; but the second will contribute no utility at all; that is, the momentary subjective sub-utility will be as if the second Marmite-eating event did not occur. The reason is that, since the second event is not currently occurring, and since you do not assign any utility to living a life that includes such an event independently of whether it is currently occurring, the occurrence of the second event is not subjectively valuable from your current point of view. Unlike friendship, the aesthetic experience of eating Marmite is not something we value other than as a currently occurring experience. Of course, if we ask about the momentary subjective sub-utility I assign to this outcome in 20 years, when the second Marmite-eating event is in full swing, it will be the aesthetic experience of this second event that contributes, since it will be occurring at that time, and the aesthetic experience of the first event will not contribute at all, since it will not be occurring at that time.

Now let us apply these three sub-utility functions to **Disney**, **Friendship**, and **Immigration**:

- Consider the outcome in **Disney** in which I buy the ticket and will attend the new water park in 20 years. I assign no objective sub-utility to this outcome---the experience of attending a Disney water park is not something to which objective value attaches. I also assign no enduring subjective utility to it---the experience is valuable at the time it is experienced, but I assign no value to living a life that contains such an experience unless the experience is currently occurring. Moreover, I currently assign no momentary subjective utility to the experience, since the event is not occurring. But now consider my sub-utility functions in 20 years: again, I assign no objective or enduring subjective sub-utility to this outcome, for the same reason as above; but now, of course, I do assign a momentary utility to it---I assign whatever will be my subjective utility of attending a Disney water park at that time.
- Consider the outcome in **Friendship** in which I decline the job and commit to a lifelong friendship. I currently assign no objective sub-utility to this outcome---again, it is not something to which objective value attaches. But I do currently assign enduring subjective utility to it---indeed, I currently assign it less enduring subjective utility than I assign to the outcome in which I accept the job and abandon the friendship after 20 years. I assign it no momentary subjective utility. Now consider the sub-utilities I assign in 20 years. Again, I assign no objective or momentary subjective utility. But I do, at that time, assign enduring subjective utility---indeed, at that time, I assign the lifelong friendship greater enduring subjective utility than I assign to the outcome in which I take up the job.
- Consider the outcome in **Immigration** in which I choose the illiberal immigration policy. I currently assign no subjective utility to this, either enduring or momentary. But I currently assign very low objective sub-utility to it. Nonetheless, in 20 years, I will assign very high objective sub-utility to it. And I will continue to assign no subjective utility of either sort to it.

The first thing to notice is that, when we plug this account of total utility into the definition of the value of an act, and then apply the **Maximize value** principle from above, we get exactly the advice that we think we should get in each of the test cases, **Disney**, **Friendship**, and **Immigration**:

- Because it is only your *current* assessment of the objective value of an outcome that contributes to its overall utility, the principle advises you to choose in line with those current assessments in cases like **Immigration**---that is, it advises you to choose the liberal policy for your country.
- In the case of **Disney**, by contrast, it advises you to choose in line with your future utilities, since none of your current or future selves assigns any objective or enduring subjective utility to the outcome in which you attend a water park in 20 years; and none of your current or future selves assigns any momentary subjective utility to that outcome, *except the self in 20 years who will actually be attending the water park*, and that self assigns very low utility. Thus, in **Disney**, it is that future self who determines the choice.
- In **Friendship**, as we anticipated, things are murkier. This is because being engaged in friendships and occupying dream jobs are the sorts of components of an outcome that we value at a time even if they aren't occurring at that time: I currently value living a life that will include a lifelong friendship, even if the friendship has yet to begin; I currently value living a life that will include occupying a dream job, even if I have yet to take up that job. This contrasts with attending a Disney water park or eating Marmite: at a given time, I don't value living a life that includes those components unless I am experiencing them at that time. Thus, in **Friendship**, the degree to which I currently value 20 years of friendship followed by my dream job contributes to the overall utility of that outcome; but so does the degree to which I will value it in 20 years. And similarly for the outcome in which I enjoy the lifelong friendship of this person without my dream job. Thus, whether or not my overall utility for one option is greater than for the other will be determined by how quickly I will change how much I value the friendship and by how much that value will increase. The more quickly I come to value this new friendship very highly, and thus the more future selves assign it very high enduring subjective utility and the fewer assign it merely high enduring subjective utility, the higher the overall utility will be of the outcome that includes lifelong friendship, and the more likely it is that the overall utility of that outcome is greater than the overall utility of the alternative outcome in which I abandon the friendship for the job.
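One natural way to make the combination of sub-utilities precise---though nothing above forces this exact rule, so treat it as an illustrative assumption---is to sum my current objective sub-utility with the momentary and enduring subjective sub-utilities of each present and future self. The coarse time grid and all numbers below are hypothetical:

```python
# An assumed combination rule (the exact weighting is left open above):
# U_t(outcome) = OU_t(outcome)
#                + sum over times s >= t of MSU_s(outcome)
#                + sum over times s >= t of ESU_s(outcome)

times = [0, 1, 2]  # 0 = now; 1, 2 = successive future stages of my life

def total_utility(t, outcome, OU, MSU, ESU):
    """Overall utility at t under the assumed additive rule."""
    return (OU[t](outcome)
            + sum(MSU[s](outcome) for s in times if s >= t)
            + sum(ESU[s](outcome) for s in times if s >= t))

zero = lambda outcome: 0

# Friendship, very coarsely: only enduring subjective sub-utilities matter.
# Now I rank the job higher; my future selves reverse that ranking.
ESU = {
    0: lambda o: {"job": 10, "lifelong_friendship": 8}[o],
    1: lambda o: {"job": 8, "lifelong_friendship": 10}[o],
    2: lambda o: {"job": 8, "lifelong_friendship": 10}[o],
}
OU = {t: zero for t in times}
MSU = {t: zero for t in times}

print(total_utility(0, "job", OU, MSU, ESU))                  # 26
print(total_utility(0, "lifelong_friendship", OU, MSU, ESU))  # 28
```

The more future stages share the reversed ranking, the more the lifelong friendship dominates, which matches the verdict about duration in **Friendship** above.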

So much for defining the utility $U_t$ in terms of $OU_t$, $MSU_t$, and $ESU_t$. How then are we to define the value of an act at a time in terms of $U_t$? We might simply define it as above:

$$V_t(A) = \sum_{S \in \mathcal{S}} p_t(S)U_t(AS)$$

However, as in the previous post, we must recognise that there might be uncertainty about the overall utility $U_t$ that arises from uncertainty about the sub-utilities that comprise it. For instance, in one outcome, I might undergo an epistemically transformative experience at a later time. Now, my current overall utility for that outcome depends on the momentary and enduring subjective sub-utilities I assign to it at later times in my life, such as the time at which I have the epistemically transformative experience. Thus, if I am uncertain of what they will be, I am uncertain of my overall utility. I will solve the problem of uncertainty about utilities in a way that is slightly different from the way I solved it in the previous post. Here, I assume that we assign probabilities not just to states of the world, but to states of the world together with sub-utility profiles, which specify an objective sub-utility function for the present time, and momentary and enduring subjective sub-utility functions for each moment of time in our future life: thus, we assign probabilities to conjunctions of the form $S\ \&\ U_t = U$, where this says that state $S$ obtains and I have overall utility function $U$ at $t$. Once we do this, we can define the value of an act at $t$ as follows:

$$V_t(A) = \sum_{S \in \mathcal{S}, U \in \mathcal{U}} p_t(S\ \&\ U_t = U)U(AS)$$

where $\mathcal{U}$ is the set of possible overall utility functions.
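A minimal sketch of this last definition in code, with everything hypothetical: one act, two states, and two candidate overall utility functions I might turn out to have, with a joint credence over the conjunctions:

```python
# V_t(A) = sum over states S and utility functions U of
#          p_t(S & U_t = U) * U(AS)

states = ["S1", "S2"]

# Two overall utility functions I might turn out to have (hypothetical):
utilities = {
    "hi": {("A", "S1"): 10, ("A", "S2"): 6},
    "lo": {("A", "S1"): 2,  ("A", "S2"): 1},
}

# Joint credence over conjunctions S & U_t = U (sums to 1):
p = {
    ("S1", "hi"): 0.2, ("S1", "lo"): 0.3,
    ("S2", "hi"): 0.1, ("S2", "lo"): 0.4,
}

def value(A):
    """Expected utility, weighting each (state, utility-function) pair."""
    return sum(p[(S, name)] * utilities[name][(A, S)]
               for S in states for name in utilities)

print(value("A"))  # 0.2*10 + 0.3*2 + 0.1*6 + 0.4*1, approximately 3.6
```

Note that the same act can receive different contributions from the same state depending on which overall utility function turns out to be mine.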

## Personally transformative experiences and causal decision theory

So far, we have discussed cases in which you know that your (sub-)utilities will change over time regardless of which action you choose. However, the cases that particularly interest Paul are cases in which we must choose between acts at least one of which might help to bring about (sub-)utility change. That is, she is interested in the problem of utility change as it affects causal decision theory. Thus, Paul is interested in what Edna Ullmann-Margalit calls "big decisions" (Ullmann-Margalit 2006). For instance, the decision to emigrate to another country is a big decision in Ullmann-Margalit's terminology, as is the decision to have a child or change career or take up with a new group of friends. In Paul's terminology, what makes these decisions big decisions is that they will give rise to personally transformative experiences: these are experiences that lead you to change your (sub-)utilities. When you emigrate to another country, it may be that a range of experiences (becoming immersed in that country's culture, learning its history, becoming friends with its citizens, etc.) gradually shifts your utilities until, in a matter of years, they are quite different from your utilities before the move; similarly, but more acutely, when you become a parent, your initial experiences of holding your child and beginning to care for them change your utilities dramatically. Personally transformative experiences certainly provide one way in which utilities might change. However, the problem of (sub-)utility change for decision theory is more general, since (sub-)utilities can change for other reasons as well: they might simply drift over a long period of time, or they might suddenly change in an instant, but not because of a particular experience.

To move from the simple decision theory we've been considering so far to causal decision theory, we need make only one change: in the original formulation of simple decision theory, we said that an agent assigns probabilities to states of the world; in the previous section, we amended that so that she assigns probabilities to conjunctions of states of the world together with a claim about her current overall utilities; in causal decision theory, we say that our agent assigns probabilities to subjunctive conditionals of the form $A > (S\ \&\ U_t = U)$. That is, in causal decision theory, an agent assigns a probability to a subjunctive conditional that states that, if a particular act $A$ were performed, then the state $S$ would obtain and the agent's current overall utility function (which is, recall, determined in part by certain future sub-utility functions) would be $U$. Then the definition of the value of an act becomes this:

$$V_t(A) = \sum_{S \in \mathcal{S}, U \in \mathcal{U}} p_t(A > (S\ \&\ U_t = U))U(AS)$$

where $\mathcal{U}$ is the set of possible overall utility functions.

Thus, when I decide whether or not to become a parent, I must consider all the ways in which becoming a parent might change my sub-utilities over the remainder of my life: how it might change the ways in which I value components of outcomes momentarily and how it might change the ways I value those components enduringly. Then I must combine those with the ways in which I currently think objective value attaches to the outcomes. And I weight this combined overall utility by how confident I am that it will be the consequence of becoming a parent and I sum up the weighted values. This gives me the value of the act of becoming a parent and allows me to compare it with the value of the alternative in order to make my decision.
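The calculation described above can be sketched as follows. All the numbers and trajectory names here are invented; the point is only the structure: a life's overall utility is taken to be the sum of momentary and enduring sub-utilities over future moments, and the value of an act weights each candidate sub-utility trajectory by the credence that the act would bring it about.

```python
# Toy model of the parenthood decision (all numbers invented).
# A "trajectory" lists, for each future moment, the (momentary, enduring)
# sub-utilities of the outcome at that moment; a life's overall utility
# is their sum.

def overall_utility(trajectory):
    return sum(momentary + enduring for momentary, enduring in trajectory)

# Two ways my sub-utilities might evolve if I become a parent: in one,
# the experience transforms me and I come to value parenthood highly;
# in the other, my current (negative) valuations persist.
transformed   = [(-1, -1), (2, 2), (2, 2)]      # hard at first, then valued
untransformed = [(-1, -1), (-1, -1), (-1, -1)]  # current valuations persist

# Credences in the conditionals "if I became a parent, my sub-utilities
# would follow this trajectory".
value_parent = (0.75 * overall_utility(transformed)
                + 0.25 * overall_utility(untransformed))

# Remaining childless: I'm confident my current sub-utilities persist.
childless = [(1, 1), (1, 1), (1, 1)]
value_childless = 1.0 * overall_utility(childless)
```

With these particular numbers remaining childless comes out ahead (6.0 against 3.0), but shifting the credences or the trajectories can reverse that, which is exactly why the agent's uncertainty about how the experience would transform her matters to the decision.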

## References

- Paul, L. A. (forthcoming) *Transformative Experience*. Oxford University Press.
- Ullmann-Margalit, Edna (2006) 'Big Decisions: Opting, Converting, Drifting'. *Royal Institute of Philosophy Supplement* 58: 157-72.

Hi Richard,

This is a very interesting way to model the problem. I like the idea that, by distinguishing between different kinds of utility, we may be able to develop a decision rule that tells a person how to adjudicate between the utilities of her different time slices. Thanks again for engaging with the project in such a productive and creative way.

One thing I’d like to hear more about is how your model tells us what to decide when we are considering the possibility of becoming a kind of person that we currently don’t want to become, but once we have become that new kind of person we will then value being that kind of person.

For example, take a case where a childless person assigns a positive momentary value to being childless and a negative momentary value to being a parent, and a positive enduring value to being childless and a negative enduring value to being a parent. But let us assume that having a child will flip her momentary and enduring utility assignments. Once she has a child, she’ll assign a negative momentary value to being childless and a positive momentary value to being a parent, and a negative enduring value to being childless and a positive enduring value to being a parent. Moreover, since parenthood is for life, assume she’ll be in the latter state for much longer than the former.

Would you say that, even right now, when the childless person doesn’t have a child, and, say, even defines herself in part as being a happily child-free person, nevertheless, she should have a child? I think you would: that is, if you are this childless person, the rational thing to do is to reject your current self and your current values and replace your self with the (parental) self that you don’t want to become.

One reason why I find this response interesting is because, again, as with your first solution to the decision problem, the model you propose has disturbing personal and existential consequences. As a consequence, as I noted in my reply to your first post, we find ourselves in a dilemma: we can choose the horn that preserves decision theory, but this horn fails to preserve a central way we want to think about how to make big life decisions. Or, we can choose the horn that preserves how we want to make big life choices, that is, we can choose the horn that allows us to make them by imaginatively assessing our possible futures and choosing in accordance with the kind of person we take ourselves to be now or currently want to become, but choosing this horn means we cannot preserve rational decision-making.

And so, as before, I maintain that we face a dilemma: if, in order to preserve first personal rational decision-making for big decisions, we give up on what we care about when we make these big choices, then we give up on the existential dimension of choice. That is, we give up on the dimension of personal choice where, from the first personal perspective, an individual assesses her possible futures and chooses which future she wants to occupy in accordance with who, intrinsically and authentically, she takes herself to be. And, for those who want to grasp the first horn, once we give up on the existential dimension of choice, what is left of the normativity that was supposed to be captured by normative decision theory?

Thanks very much for this, Laurie. In the case you consider, I don't think my proposed decision theory would compel you to have a child. The way you describe it, you currently assign (say) 1 momentary utile and 1 enduring utile to being childless, and -1 momentary utile and -1 enduring utile to having a child; and these will be reversed if you choose to have a child. But in that situation, the act of having a child will have the same overall utility as the act of remaining childless. After all, having a child will change your utilities so that you assign 1 momentary utile and 1 enduring utile to each moment of the life you'll lead if you perform that act. But remaining childless will keep your utilities the same, so that you'll assign 1 momentary utile and 1 enduring utile to each moment of the life you'll lead if you perform *that* act. So, according to my decision theory, there will be nothing to choose between them -- you'll be indifferent between them.

Having said that, my decision theory does recommend having a child in the following situation. Suppose you currently assign 1 momentary and 1 enduring utile to being childless and -1 momentary and -1 enduring utile to having a child. But you know that, if you have a child, you will assign 2 momentary utiles and 2 enduring utiles to having a child and -2 momentary and -2 enduring utiles to being childless. Then, in that situation, you should have a child. But that seems right: if you can change your utilities in such a way that you know that you will value the life you will thereby live more than you will value the alternative if you don't change your utilities, then you should choose to change your utilities. That seems right to me. What do you think?

Whoops, you are right, I shouldn’t have said that the enduring utilities are merely flipped. So let me accept your redescription of the case as one where the very act of having the child revises your utilities in substantial ways (more substantial than just flipping them).*

My conclusion is the same. That is, the problem I have with this result is that, in order to maximize expected utility (according to this decision theory), you have to replace your current self with a new kind of self.

The childless person, in this example, when she is faced with her choice, is happily child-free and has no (first order) desire to become a parent. She just doesn’t want kids. But she also wants to be rational.

I see this as a dilemma, and that’s what I meant by “we can choose the horn that preserves decision theory, but this horn fails to preserve a central way we want to think about how to make big life decisions. Or, we can choose the horn that preserves how we want to make big life choices, that is, we can choose the horn that allows us to make them by imaginatively assessing our possible futures and choosing in accordance with the kind of person we take ourselves to be now or currently want to become, but choosing this horn means we cannot preserve rational decision-making.”

*There’s more to say about how we are supposed to “know” how our utilities will change. I may come back to this later.