This post originally appeared at the OUPBlog. It is the first in a series of cross-posted blogs by Roy T Cook (Minnesota) from the OUPBlog series on Paradox and Puzzles.
The Liar paradox arises via considering the Liar sentence:
L: L is not true.
and then reasoning in accordance with the T-schema:
“Φ is true if and only if what Φ says is the case.”
Along similar lines, we obtain the Montague paradox (or the “paradox of the knower”) by considering the following sentence:
M: M is not knowable.
and then reasoning in accordance with the following two claims:
“If Φ is knowable then what Φ says is the case.”
“If Φ is a theorem (i.e. is provable), then Φ is knowable.”
Put in very informal terms, these results show that our intuitive accounts of truth and of knowledge are inconsistent. Much work in logic has been carried out in attempting to formulate weaker accounts of truth and of knowledge that (i) are strong enough to allow these notions to do substantial work, and (ii) are not susceptible to these paradoxes (and related paradoxes, such as Curry and Yablo versions of both of the above). A bit less well known is that certain strong but not altogether implausible accounts of idealized belief also lead to paradox.
The puzzles involve an idealized notion of belief (perhaps better paraphrased as “rational commitment” or “justifiable belief”), where one believes something in this sense if and only if (i) one explicitly believes it, or (ii) one is somehow committed to the claim even if one doesn’t actively believe it. Hence, on this understanding belief is closed under logical consequence – one believes all of the logical consequences of one’s beliefs. In particular, the following holds (call it B-Closure):
“If you believe that, if Φ then Ψ, and you believe Φ, then you believe Ψ.”
Now, for such an idealized account of belief, the rule of B-Necessitation:
“If Φ is a theorem (i.e. is provable), then Φ is believed.”
is extremely plausible – after all, presumably anything that can be proved is something that follows from things we believe (since it follows from nothing more than our axioms for belief). In addition, we will assume that our beliefs are consistent (call this B-Consistency):
“If I believe Φ, then I do not believe that Φ is not the case.”
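For readers who know some modal logic, these three assumptions can be written compactly. Writing Bφ for “Φ is believed” (a formalization I am supplying here, not the post’s own notation), they correspond to the familiar K axiom, the necessitation rule, and the D axiom of doxastic logic:

```latex
\begin{align*}
\text{B-Closure (K):} \quad & B(\varphi \rightarrow \psi) \rightarrow (B\varphi \rightarrow B\psi) \\
\text{B-Necessitation:} \quad & \text{if } \vdash \varphi \text{, then } \vdash B\varphi \\
\text{B-Consistency (D):} \quad & B\varphi \rightarrow \neg B\neg\varphi
\end{align*}
```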
So far, so good. But neither the belief analogue of the T-schema:
“Φ is believed if and only if what Φ says is the case.”
nor the belief analogue of Factivity:
“If you believe Φ then what Φ says is the case.”
is at all plausible. After all, just because we believe something (or even that the claim in question follows from what we believe, in some sense) doesn’t mean the belief has to be true!
There are other, weaker principles about belief, however, that are not intuitively implausible, but that, when combined with B-Closure, B-Necessitation, and B-Consistency, lead to paradox. We will look at two such principles – each of which captures a sense in which we cannot be wrong about what we think we don’t believe.
The first such principle we will call the First Transparency Principle for Disbelief (TPDB1):
“If you believe that you don’t believe Φ then you don’t believe Φ.”
In other words, although many of our beliefs can be wrong, according to TPDB1 our beliefs about what we do not believe cannot be wrong. The second principle, which is a mirror image of the first, we will call the Second Transparency Principle for Disbelief (TPDB2):
“If you don’t believe Φ then you believe that you don’t believe Φ.”
In other words, according to TPDB2 we are aware of (i.e. have true beliefs about) all of the facts regarding what we don’t believe.
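Both transparency principles admit a compact rendering. Writing Bφ for “Φ is believed” (again my formalization, not the post’s own notation), they are:

```latex
\begin{align*}
\text{TPDB1:} \quad & B\neg B\varphi \rightarrow \neg B\varphi \\
\text{TPDB2:} \quad & \neg B\varphi \rightarrow B\neg B\varphi
\end{align*}
```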
Either of these principles, combined with B-Closure, B-Necessitation, and B-Consistency, leads to paradox. I will present the argument for TPDB1. The argument for TPDB2 is similar, and is left to the reader (although I will give an important hint below).
Consider the sentence:
S: It is not the case that I believe S.
Now, by inspection we can understand this sentence, and thus conclude that:
(1) What S says is the case if and only if I do not believe S.
Further, (1) is something we can, via inspecting the original sentence, informally prove. (Or, if we were being more formal, and doing all of this in arithmetic enriched with a predicate “B(x)” for idealized belief, a formal version of the above would be a theorem due to Gödel’s diagonalization lemma.) So we can apply B-Necessitation to (1), obtaining:
(2) I believe that: what S says is the case if and only if I do not believe S.
Applying a version of B-Closure, this entails:
(3) I believe S if and only if I believe that I do not believe S.
Now, assume (for reductio ad absurdum) that:
(4) I believe S.
Then combining (3) and (4) and some basic logic, we obtain:
(5) I believe that I do not believe S.
Applying TPDB1 to (5), we get:
(6) I do not believe S.
But this contradicts (4). So lines (4) through (6) amount to a refutation of line (4), and hence a proof that:
(7) I do not believe S.
Now, (7) is clearly a theorem (we just proved it), so we can apply B-Necessitation, arriving at:
(8) I believe that I do not believe S.
Combining (8) and (3) leads us to:
(9) I believe S.
But this obviously contradicts (7), and we have our final contradiction.
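The propositional core of this argument is small enough to check mechanically. The sketch below (my own illustration, not part of the original argument) encodes just two Boolean facts – whether I believe S, and whether I believe that I don’t believe S – and verifies that (3) plus TPDB1 forces (7) in every model, while adding the B-Necessitation step (8) leaves no consistent model at all:

```python
from itertools import product

# Two propositional atoms:
#   b_s   : "I believe S"
#   b_nbs : "I believe that I do not believe S"

def step3(b_s, b_nbs):
    # (3) I believe S if and only if I believe that I do not believe S
    return b_s == b_nbs

def tpdb1(b_s, b_nbs):
    # TPDB1: if I believe that I don't believe S, then I don't believe S
    return (not b_nbs) or (not b_s)

# First half of the argument, steps (4)-(7): every model satisfying
# both (3) and TPDB1 makes "I believe S" false.
models = [(b_s, b_nbs)
          for b_s, b_nbs in product([False, True], repeat=2)
          if step3(b_s, b_nbs) and tpdb1(b_s, b_nbs)]
assert all(not b_s for b_s, _ in models)
print("models of (3) + TPDB1:", models)

# Second half: B-Necessitation applied to (7) yields (8), i.e. b_nbs
# must be True. Adding that constraint rules out every remaining
# model -- the paradox.
models_with_8 = [(b_s, b_nbs) for b_s, b_nbs in models if b_nbs]
print("models surviving step (8):", models_with_8)
```

Running this shows the only model of (3) and TPDB1 is the one where both atoms are false, and that no model survives once (8) is imposed.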
Note that this argument does not actually use B-Consistency. (Hint for the second argument, involving TPDB2: you will need B-Consistency!)
These paradoxes seem to show that, as a matter of logic, we cannot have perfectly reliable beliefs about what we don’t believe – in other words, in this idealized sense of belief, there are always claims such that we believe we don’t believe them, yet in fact we do believe them (the failure of TPDB1), and claims that we don’t believe, yet we fail to believe that we don’t believe them (the failure of TPDB2). At least, the puzzles show this if we take them to force us to reject both TPDB1 and TPDB2 in the same way that many feel that the Liar paradox forces us to abandon the full T-schema.
Once we’ve considered transparency principles for disbelief, it’s natural to consider corresponding principles for belief. There are two. The first is the First Transparency Principle for Belief (TPB1):
“If you believe that you believe Φ then you believe Φ.”
In other words, according to TPB1 our beliefs about what we believe cannot be wrong. The second principle, again a mirror image of the first, is the Second Transparency Principle for Belief (TPB2):
“If you believe Φ then you believe that you believe Φ.”
In other words, according to TPB2 we are aware of all of the facts regarding what we believe.
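These two principles also have compact renderings. Writing Bφ for “Φ is believed” (my notation, not the post’s), TPB2 is what modal logicians call the positive introspection (4) axiom:

```latex
\begin{align*}
\text{TPB1:} \quad & BB\varphi \rightarrow B\varphi \\
\text{TPB2 (4):} \quad & B\varphi \rightarrow BB\varphi
\end{align*}
```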
Is either of these two principles, combined with B-Closure, B-Necessitation, and B-Consistency, paradoxical? If not, are there additional, plausible principles that would lead to paradoxes if added to these claims? I’ll leave it to the reader to explore these questions further.
A historical note: Like so many other cool puzzles and paradoxes, versions of some of these puzzles first appeared in the work of medieval logician Jean Buridan.