Self-recommending decision theories for imprecise probabilities
A PDF version of this post is available here.
The question of this blogpost is this: Take the various decision theories that have been proposed for individuals with imprecise probabilities---do they recommend themselves? It is the final post in a trilogy on the topic of self-recommending decision theories (the others are here and here).
*One precise kitten and one imprecise kitten*
First, imprecise probabilities (sometimes known as mushy credences; for an overview, see Seamus Bradley's SEP entry here). For various reasons, some formal epistemologists think we should represent an individual's beliefs not by a precise probability function, which assigns to each proposition about which they have an opinion a single real number between 0 and 1, but rather by a set of such functions. Some think that, whatever rationality requires of them, most individuals simply don't make sufficiently strong and detailed probabilistic judgments to pick out a single probability function. Others think that, at least when the individual's evidence is very complex or very vague or very sparse, rationality in fact requires them not to make judgments that pick out just one function. Whatever the reason, many think we should represent an individual's beliefs by a set $P$ of such probability functions.
Second, decision theories for an individual with imprecise probabilities. Suppose you have opinions only about two possible worlds, $w_1$ and $w_2$, so that each probability function in your set $P$ is fixed by the single probability it assigns to $w_1$. A decision problem presents you with a set of options, each of which assigns a utility to each world; a decision theory for imprecise probabilities tells you, given $P$ and those utilities, which of the options you are rationally permitted to choose.
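To fix ideas, here is a minimal Python sketch. It is my own illustration rather than anything from the post, and the particular numbers in `credal_set` are arbitrary; since there are only two worlds, each probability function can be represented by the single probability it gives $w_1$.

```python
# A toy imprecise credal state over two worlds, w1 and w2. Each probability
# function is determined by the probability it assigns to w1, so we can
# represent the set P by a list of numbers between 0 and 1.
credal_set = [0.3, 0.4, 0.5, 0.6]  # hypothetical set of probability functions

def expected_utility(p_w1, option):
    """Expected utility of an option (a pair of world-utilities) by the
    lights of the probability function giving w1 probability p_w1."""
    u_w1, u_w2 = option
    return p_w1 * u_w1 + (1 - p_w1) * u_w2

option = (1.0, 0.0)  # utility 1 at w1, utility 0 at w2

# With a precise probability function, an option gets a single expected
# utility; with an imprecise set, it gets one value per function in the set.
exps = [expected_utility(p, option) for p in credal_set]
print(min(exps), max(exps))  # prints 0.3 0.6
```

The scalar representation works only because there are two worlds; the sketches below use full probability vectors instead.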
Third, self-recommending decision theories. Suppose you are unsure what decision problem you're about to face. Indeed you think each possible decision problem is equally likely. Then you can use any decision theory to pick between the available decision theories that you might use when faced with whichever decision problem arises. A decision theory is self-recommending if it says that it's permissible to pick itself.
Let's meet the decision theories for imprecise probabilities. This list may well not be comprehensive, but I've tried to identify the main ones (thanks to Jason Konek for an impromptu tutorial on them).
Suppose $P$ is your set of probability functions, and suppose $a$, $b$, etc. are the options available to you. For a probability function $p$ and an option $a$, write $\mathrm{Exp}_p(a)$ for the expected utility of $a$ calculated relative to $p$.
Global Dominance

$a$ is permissible iff there is no alternative $b$ such that $\mathrm{Exp}_p(a) < \mathrm{Exp}_{p'}(b)$ for all $p$ and $p'$ in $P$; that is, iff no alternative's minimum expected utility exceeds $a$'s maximum expected utility.
Maximality

$a$ is permissible iff there is no alternative $b$ such that $\mathrm{Exp}_p(a) < \mathrm{Exp}_p(b)$ for all $p$ in $P$.

Notice that the difference between Maximality and Global Dominance lies in the quantifiers: Maximality compares $a$ and $b$ by the lights of one probability function at a time, while Global Dominance allows the two options to be evaluated by different functions. So anything Global Dominance rules out, Maximality rules out as well.

$E$-admissibility

$a$ is permissible iff there is some $p$ in $P$ such that $\mathrm{Exp}_p(a) \geq \mathrm{Exp}_p(b)$ for every alternative $b$.

$\Gamma$-Maximin

$a$ is permissible iff it maximizes minimum expected utility: that is, iff $\min_{p \in P} \mathrm{Exp}_p(a) \geq \min_{p \in P} \mathrm{Exp}_p(b)$ for every alternative $b$.

$\Gamma$-Maximax

$a$ is permissible iff it maximizes maximum expected utility: that is, iff $\max_{p \in P} \mathrm{Exp}_p(a) \geq \max_{p \in P} \mathrm{Exp}_p(b)$ for every alternative $b$.

$\alpha$-Hurwicz

Fix a weight $0 \leq \alpha \leq 1$. Then $a$ is permissible iff it maximizes the weighted average $\alpha \min_{p \in P} \mathrm{Exp}_p(a) + (1 - \alpha) \max_{p \in P} \mathrm{Exp}_p(a)$.
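To make the differences between these rules vivid, here is a minimal Python sketch of all six. It is my own illustration, not from the original post; the function names and the representation of options as tuples of world-utilities are assumptions made for concreteness.

```python
def exp_u(p, option):
    """Expected utility of an option by the lights of probability vector p."""
    return sum(pi * ui for pi, ui in zip(p, option))

def global_dominance(P, options):
    """Permit a unless some b's minimum expected utility exceeds a's maximum."""
    return [a for a in options
            if not any(min(exp_u(p, b) for p in P) > max(exp_u(p, a) for p in P)
                       for b in options)]

def maximality(P, options):
    """Permit a unless some b beats a by the lights of every p in P."""
    return [a for a in options
            if not any(all(exp_u(p, b) > exp_u(p, a) for p in P)
                       for b in options)]

def e_admissibility(P, options):
    """Permit a iff a maximizes expected utility relative to some p in P."""
    return [a for a in options
            if any(all(exp_u(p, a) >= exp_u(p, b) for b in options) for p in P)]

def gamma_maximin(P, options):
    """Permit the options with the greatest minimum expected utility."""
    best = max(min(exp_u(p, a) for p in P) for a in options)
    return [a for a in options if min(exp_u(p, a) for p in P) == best]

def gamma_maximax(P, options):
    """Permit the options with the greatest maximum expected utility."""
    best = max(max(exp_u(p, a) for p in P) for a in options)
    return [a for a in options if max(exp_u(p, a) for p in P) == best]

def hurwicz(P, options, alpha):
    """Permit the options maximizing the alpha-weighted mix of min and max."""
    def score(a):
        es = [exp_u(p, a) for p in P]
        return alpha * min(es) + (1 - alpha) * max(es)
    best = max(score(a) for a in options)
    return [a for a in options if score(a) == best]
```

For instance, with $P = \{(0.2, 0.8), (0.8, 0.2)\}$ and options $(1, 0)$, $(0, 1)$ and $(0.4, 0.4)$, Global Dominance and Maximality permit all three options, $E$-admissibility permits only the first two, and $\Gamma$-Maximin permits only the third.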
Next, let me specify the situation in which we're testing for self-recommendation a little more precisely. We imagine that there's a maximum utility that you might receive in the decision problem, let's say $1$, and a minimum, let's say $0$, and we take every assignment of utilities between these bounds to the available options at the various worlds to be equally likely.
All of the decision theories we've mentioned will sometimes permit more than one option: for instance, Global Dominance permits an option whenever no alternative's minimum expected utility exceeds its maximum expected utility, and in many decision problems every option passes that test. So, to make the expected utility of using a decision theory well defined, we suppose that, when a theory permits more than one option, you choose among the permitted options at random, each with equal probability.
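In code (again my own sketch, building on `exp_u` and the rule functions defined above), that stipulation gives "the expected utility of using a theory" in a single decision problem a precise meaning:

```python
def exp_u_of_using(theory, P, options, p):
    """Expected utility, by the lights of p, of using `theory` in a given
    decision problem: the permitted options are chosen uniformly at random,
    so we average their expected utilities."""
    permitted = theory(P, options)
    return sum(exp_u(p, a) for a in permitted) / len(permitted)
```

Averaging this quantity over the equally likely decision problems gives the expected utilities that the results below compare.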
Now, let $P$ be a set containing more than one probability function. Then it turns out that:

- the maximum expected utility of using Global Dominance with $P$ is less than the minimum expected utility of using Expected Utility with any $p$ in $P$;
- the minimum expected utility of using $\Gamma$-Maximin with $P$ is less than the minimum expected utility of using Expected Utility with any $p$ in $P$;
- the minimum expected utility of using $\Gamma$-Maximax with $P$ is less than the minimum expected utility of using Expected Utility with any $p$ in $P$, and the maximum expected utility of using $\Gamma$-Maximax with $P$ is less than the maximum expected utility of using Expected Utility with any $p$ in $P$;
- the $\alpha$-weighted average of the minimum and maximum expected utilities of using $\alpha$-Hurwicz with $P$ is less than the $\alpha$-weighted average of the minimum and maximum expected utilities of using Expected Utility with any $p$ in $P$;
- for all $p$ in $P$, the expected utility by the lights of $p$ of using $E$-admissibility with $P$ is less than the expected utility by the lights of $p$ of using Expected Utility with $p$;
- for all $p$ in $P$, the expected utility by the lights of $p$ of using Maximality with $P$ is less than the expected utility by the lights of $p$ of using Expected Utility with $p$.

In each case, then, the decision theory judges using itself to be worse, by its own standard, than using Expected Utility with a precise probability function drawn from $P$: none of these decision theories is self-recommending.
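None of these claims is proved here, but a rough Monte Carlo check is easy to run. The sketch below is my own construction: it assumes the helper functions from the earlier sketches, and the toy credal set, the number of options and worlds, and the trial count are all arbitrary choices.

```python
import random

def random_problem(n_options=3, n_worlds=2):
    """A random decision problem: each option receives a utility in [0, 1]
    at each world, with all assignments equally likely (cf. the setup above)."""
    return [tuple(random.random() for _ in range(n_worlds))
            for _ in range(n_options)]

def value_by_lights_of(theory, P, p, trials=20000):
    """Monte Carlo estimate, by the lights of p, of the expected utility
    of using `theory` across random decision problems."""
    return sum(exp_u_of_using(theory, P, random_problem(), p)
               for _ in range(trials)) / trials

P = [(0.2, 0.8), (0.8, 0.2)]  # a hypothetical imprecise credal state

def expected_utility_theory(p_fixed):
    """Precise Expected Utility with the single probability function p_fixed."""
    def theory(P_unused, options):
        best = max(exp_u(p_fixed, a) for a in options)
        return [a for a in options if exp_u(p_fixed, a) == best]
    return theory

# First bullet: max (over p in P) expected utility of using Global Dominance
# with P, versus min (over p in P) expected utility of using Expected Utility
# with a fixed member of P.
gd_max = max(value_by_lights_of(global_dominance, P, p) for p in P)
for p_star in P:
    eu_min = min(value_by_lights_of(expected_utility_theory(p_star), P, p)
                 for p in P)
    print(f"GD max: {gd_max:.3f}  vs  EU with {p_star} min: {eu_min:.3f}")
```

If the first bullet is right, the Global Dominance estimate should come out below both Expected Utility estimates; analogous checks can be run for the other rules.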