Friday, 29 April 2011

Validity and Truth-Preservation

The weak theory of validity contains Peano arithmetic, $PA$, plus the scheme (V-Out) and the weak introduction rule (V-Intro). This theory is consistent, since it already lives inside Peano arithmetic. Can one consistently add to it the following two truth-theoretic principles?
(a) $\mathbf{Val}(\ulcorner A \urcorner, \ulcorner B \urcorner) \rightarrow (\mathbf{T}(\ulcorner A \urcorner) \rightarrow \mathbf{T}(\ulcorner B \urcorner))$.
(b) $\mathbf{T}(\ulcorner A \rightarrow B \urcorner) \rightarrow (\mathbf{T}(\ulcorner A \urcorner) \rightarrow \mathbf{T}(\ulcorner B \urcorner))$.
The first says that valid inferences preserve truth. The second says, in effect, that Modus Ponens preserves truth. (N.B., everything here is classical first-order logic.) There is a consistent truth extension of $PA$ including axioms:
(i) $\forall x (\mathbf{Prov}_{log}(x) \rightarrow \mathbf{T}(x))$
(ii) $\forall x \forall y(\mathbf{T}(x \dot{\rightarrow} y) \rightarrow (\mathbf{T}(x) \rightarrow \mathbf{T}(y)))$.
These are two of the (three) truth-involving axioms of the theory called $Base_{T}$ in H. Friedman & M. Sheard 1987, "An Axiomatic Approach to Self-Referential Truth", Annals of Pure and Applied Logic 33; it is mentioned also in Sheard's "A Guide to Truth Predicates in the Modern Era", JSL 59 (1994), and in Sheard's "Weak and Strong Theories of Truth", Studia Logica (2001). Certain kinds of revision-theoretic models for $Base_{T}$ are described in Friedman & Sheard 1987, Sheard 2001 and in Graham Leigh's 2010 PhD thesis, "Proof-Theoretic Investigations into the Friedman-Sheard Theories and Other Theories of Truth". Such models interpret $\mathbf{T}$ using Herzberger sequences, i.e., using the revision operator.
Let $S$ be the theory in $\mathcal{L}_{\mathbf{T}}$ which has the axioms of $PA$, plus (i) and (ii) as additional axioms, with full induction on all formulas in $\mathcal{L}_{\mathbf{T}}$. $S$ is a subtheory of $Base_{T}$ and so is consistent. As noted before, $S$ already proves the (unrestricted) scheme (V-Out) $\mathbf{Val}(\ulcorner A \urcorner, \ulcorner B \urcorner) \rightarrow (A \rightarrow B)$; also, if $A \vdash B$, then $S \vdash \mathbf{Val}(\ulcorner A \urcorner, \ulcorner B \urcorner)$. So, $S$ is closed under the weak introduction rule (V-Intro) for $\mathbf{Val}$. (There is a technical subtlety here. Showing that $PA$ satisfies (V-Out) requires that $PA$ be essentially reflexive. So, one needs to be careful that (V-Out) remains provable after adding new axioms containing $\mathbf{T}$.)
Here's a demonstration that $S$ also proves (a).
1. $S \vdash \forall x (\mathbf{Prov}_{log}(x) \rightarrow \mathbf{T}(x))$ (ax (i)).
2. $S \vdash \mathbf{Prov}_{log}(\ulcorner A \rightarrow B \urcorner) \rightarrow \mathbf{T}(\ulcorner A \rightarrow B \urcorner)$ (from (1)).
3. $S \vdash \mathbf{Prov}_{log}(\ulcorner A \rightarrow B \urcorner) \leftrightarrow \mathbf{Val}(\ulcorner A \urcorner, \ulcorner B \urcorner)$ (defn of $\mathbf{Val}$).
4. $S \vdash \mathbf{Val}(\ulcorner A \urcorner, \ulcorner B \urcorner) \rightarrow \mathbf{T}(\ulcorner A \rightarrow B \urcorner)$ (from (2), (3)).
5. $S \vdash \forall x \forall y(\mathbf{T}(x \dot{\rightarrow} y) \rightarrow (\mathbf{T}(x) \rightarrow \mathbf{T}(y)))$ (ax. (ii)).
6. $S \vdash \mathbf{T}(\ulcorner A \urcorner \dot{\rightarrow} \ulcorner B \urcorner) \rightarrow (\mathbf{T}(\ulcorner A \urcorner) \rightarrow \mathbf{T}(\ulcorner B \urcorner))$ (from (5)).
7. $S \vdash \ulcorner A \urcorner \dot{\rightarrow} \ulcorner B \urcorner = \ulcorner A \rightarrow B \urcorner$ (syntax inside $PA$).
8. $S \vdash \mathbf{T}(\ulcorner A \rightarrow B \urcorner) \rightarrow (\mathbf{T}(\ulcorner A \urcorner) \rightarrow \mathbf{T}(\ulcorner B \urcorner))$ (from (6), (7)).
9. $S \vdash \mathbf{Val}(\ulcorner A \urcorner, \ulcorner B \urcorner) \rightarrow (\mathbf{T}(\ulcorner A \urcorner) \rightarrow \mathbf{T}(\ulcorner B \urcorner))$ (from (4), (8)).

Tuesday, 26 April 2011

Synthese affair update

Here we go again, briefly. Based on the poll at It's only a theory, the philosophy of science community is split into one third willing to boycott Synthese and two thirds unwilling. A petition has been launched as a milder form of action.

Applicability, mixed & pure, and modality

Here are some thoughts on the applicability of mathematics. Important work on the applicability of mathematics by Quine, Putnam and Field clarified that mathematicized scientific laws, if we examine them closely, contain mixed predicates, whose interpretations are mixed relations between "concreta" and mathematical objects. A simple example of such a mixed predicate is the membership predicate $\in$. Furthermore, the mathematical objects that arise may be mixed or pure. Examples of pure mathematical objects are: natural numbers, integers, real numbers, complex numbers, infinite cardinals and ordinals; also various structures that turn up in algebra and geometry, so long as these are understood in an ante rem manner. Examples of mixed mathematical objects are sets, relations and functions whose transitive closure contains "concreta":
Pure: $7, -1, \pi, e^{i \pi}, \aleph_{57}, \omega^{\omega}, \mathbb{R}, \mathbb{Z}_3, cosine, SU(3), L^2[\mathbb{R}^3]$, etc. 
Mixed: the set of US presidents; the set of London underground stations; a 3-element graph whose nodes are {Frege, Hilbert, Noether}; the measurement scale $Mass_{kg}$; the electromagnetic field $F_{ab}$; the metric tensor $g_{ab}$, etc. 
What distinguishes the pure and mixed mathematical objects? It seems to be their modal & temporal status. The set of all people who have been US presidents now has 44 members, but (unless something weird happens) on Jan 20th, 2013, it will have either 44 or 45 members. And the set of all US presidents now in the actual world has 44 members, but it could have been different. So, there is some sense in which mixed mathematical objects, being anchored in concrete actualities or possibilities, can change, temporally and modally. But it seems right to say that pure mathematical objects don't change, temporally or modally. This is why we think that statements of pure mathematics are necessary.

There is a problem, however. It has to do with what are sometimes called "Cambridge changes". If x is my coffee cup, then x changes, by its temperature cooling from 50 Celsius to 20 Celsius between times t and t*. But can we not also say that the number 50 changed, from having the property "being the temperature-in-Celsius of cup x" at time t to not having this property at time t*? Similarly, 44 has the property of "being the number of all US presidents" on 26th April 2011, but will lose this property by Jan 20th, 2017. So, pure mathematical objects can change after all!

I'm not sure what the answer to this is, but I suspect it is connected to the "rigidity" of the pure relations between pure mathematical objects. The pure mathematical objects don't "change" relations amongst each other. But, when we consider their relations to concreta, they can "change" those relations, just as concreta can change their relations amongst each other. I may well be heavier than my colleague Dr X now, but after my diet, I will definitely be lighter. Admittedly, this is all rather unclear; and, in fact, there are some thinkers who have argued that mathematical objects (mixed and pure) contingently don't exist. Field has argued for this, and has also tried to explain the consequent apparent necessity of pure mathematics in terms of its conservativeness.

Monday, 25 April 2011

These go to 11

The closed interval $[0, r]$ of real numbers is the set $\{x \in \mathbb{R} : 0 \leq x \leq r\}$. Observation: there is an isomorphism $f : [0, 10] \rightarrow [0,11]$. For example, let $f$ be the function: $f(x) = \frac{11}{10}x$. Then $f(0) = 0$ and $f(10) = 11$. And, clearly $x < y \text{ iff } f(x) < f(y)$.
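Before asking why this matters, the observation itself can be checked numerically. A minimal Python sketch (the grid of sample points is just an illustrative choice):

```python
# f(x) = (11/10) * x maps [0, 10] onto [0, 11], preserving order.
def f(x):
    return 11 * x / 10

assert f(0) == 0 and f(10) == 11

# Order preservation on a sample of pairs: x < y iff f(x) < f(y).
samples = [i / 4 for i in range(41)]          # 0, 0.25, ..., 10
for x in samples:
    for y in samples:
        assert (x < y) == (f(x) < f(y))

# The inverse g(x) = (10/11) * x witnesses that f is a bijection.
def g(x):
    return 10 * x / 11

assert all(abs(g(f(x)) - x) < 1e-12 for x in samples)
```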
So? Well, the interval $[0,10]$ is a structure known to all guitarists, as it provides a convenient labelling of the loudness of the output of their amp. Write "$Lo_1o_2$" to mean "output setting $o_1$ is less loud than output setting $o_2$". The settings of the output can be "labelled" by real numbers, and a function $m$ which maps each setting to a real number is a measurement scale. The sole requirement on such a function is that $m$ somehow "represents the loudness relations" of the output settings.
Suppose $m$ is such a measurement scale. Write $``m(o) = r"$ to mean "$m$ assigns real number $r$ to the output setting $o$". The representation condition is then:
$m(o_1) < m(o_2) \text{ iff } Lo_1o_2$
Given the nature of the physical device itself, the possible loudness settings have a minimum and a maximum. Call these $o_{min}$ and $o_{max}$. It is convenient to choose $m$ such that:
$m(o_{min}) = 0$ and $m(o_{max}) = 10$.
All intermediate loudness settings get mapped to reals between 0 and 10.

However, it is entirely a matter of convenience that guitar amplifier manufacturers adopt this convention. (And the loudness output labelled by 10 on one amp can be very different from the loudness labelled by 10 on another amp.) Measurement scales are usually non-unique, and, depending on the representation conditions all such scales must satisfy, there is a class of transformations from one scale to another. So, with guitar amps, why not have a measurement scale $m^{\ast}$ such that,
$m^{\ast}(o_{min}) = 0$ and $m^{\ast}(o_{max}) = 11$?
This is possible, as noted above: there is an isomorphism $f : [0, 10] \rightarrow [0,11]$. So, we can define $m^{\ast}$ by: $m^{\ast}(o) = f(m(o))$. This scale $m^{\ast}$ is the (well, a) Spinal Tap measurement scale:
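A quick Python sketch of the rescaling. The 21 click-stop settings and the particular scale $m$ below are illustrative assumptions, not anything fixed by the physics:

```python
# A hypothetical amp with 21 click-stop settings, represented as 0..20.
settings = list(range(21))

def m(o):
    return o / 2          # conventional scale: m(0) = 0, m(20) = 10

def f(x):
    return 11 * x / 10    # the isomorphism [0, 10] -> [0, 11]

def m_star(o):
    return f(m(o))        # the Spinal Tap scale: m*(o) = f(m(o))

assert m_star(0) == 0 and m_star(20) == 11

# m* satisfies the same representation condition as m:
# m*(o1) < m*(o2) iff m(o1) < m(o2) iff o1 is less loud than o2.
for o1 in settings:
    for o2 in settings:
        assert (m_star(o1) < m_star(o2)) == (m(o1) < m(o2))
```

Since $f$ is an order isomorphism, composing it with any legitimate scale yields another legitimate scale.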

Roy mentions the xkcd cartoon on this conceptual puzzle:

More on the Validity Predicate

Jeff has posted about adding a validity predicate to arithmetic here. I have been thinking about this, and have a different twist. Assume that we add a logical validity predicate Val(x, y) to arithmetic (in what follows, I write F both for a formula F and for its Godel code). Val(F, G) holds iff the argument from F to G is logically valid. Now, one rule that a logical validity predicate ought to satisfy is:

VS2: (Val(F, G) & F) entails G.

The second rule that a logical validity predicate ought to satisfy is:

VS1: If F entails G, then Val(F, G)

The trick here is that we need to decide, when applying VS1, which sense of 'entails' we have in mind. Should we conclude that Val(F, G) holds if:

  1. G is derivable from F in first-order logic?
  2. G is derivable from F in first-order logic supplemented with VS1 and VS2?
  3. G is derivable from F plus arithmetic?
  4. G is derivable from F plus the T-schemas?
  5. etc.

There are extant arguments that show that options 3 and 4 are inconsistent (Beall & Murzi [unpublished] and Shapiro [2010] show this for 3, and Whittle [2004] shows this for 4). But, of course, it is rather implausible that either arithmetic or the T-schemas are logically valid (though their arguments do show that other interesting notions of validity are inconsistent). To remind you, the derivation of a contradiction for case 3 goes like this. Diagonalization provides a sentence P such that:

P is arithmetically equivalent to Val(P, #)

where "#" is some arbitrary contradiction. We then reason as follows:

  1. P Assumption.
  2. Val(P, #) 1, Diagonalization.
  3. # 1, 2, VS2.
  4. Val(P, #) 1 – 3, VS1.
  5. P 4, Diagonalization.
  6. # 4, 5, VS2.

As Jeff notes in his earlier contribution, option 1 is consistent (in fact, we don't need to add a new predicate at all, since the relevant notion of validity is definable in PA). Thus, if 'entails' means derivable in first-order logic, then we can consistently add the rules above to arithmetic.

As a result, the intuitive rules for logical validity (unlike, strikingly, the intuitive rules for the truth predicate) are truth-preserving and consistent. But are they themselves logically valid? In other words, can we add versions of VS1 and VS2 to arithmetic where 'entails' means derivable from first-order logic plus VS1 and VS2? Interestingly, the answer is "no", as the following derivation demonstrates. Let Q be the conjunction of the axioms of Robinson arithmetic, and let jn(x, y) be the recursive function mapping the Godel codes of two formulas onto the code of their conjunction. Diagonalization provides a sentence P such that:

P is arithmetically equivalent to Val(jn(P, Q), #)

We now reason as follows:

  1. P & Q Assumption.
  2. Q 1, logic.
  3. P 1, logic.
  4. Val(jn(P, Q), #) 2, 3, logic.
  5. Val(P&Q, #) 2, 4, logic.
  6. # 1, 5, VS2.
  7. Val(P&Q, #) 1 – 6, VS1.
  8. Q Assumption
  9. Val(jn(P, Q), #) 7, 8, VS2.
  10. P 8, 9, logic.
  11. P & Q 8, 10, logic.
  12. # 9, 11, VS2.

A few observations about the proof:

  • We only apply arithmetic (diagonalization in the move from 3 to 4 and from 9 to 10, recursive arithmetic in the move from 4 to 5 and from 7 to 9, since these depend on the arithmetical fact that jn(P, Q) = P&Q) within the scope of an assumption of Q. Thus, the system in which the proof occurs does not assume that arithmetic is valid (or even true).
  • The proof does not show that this version of the rules VS1 and VS2 is inconsistent. Instead, it shows that these rules allow one to prove that arithmetic is inconsistent (in this manner, the result is very different from the standard proof of the inconsistency of the T-rules).

Anyway, this is kind of cool. Just as the Liar paradox (or, if you want to be fancy, Tarski's theorem) shows that the T-sentences governing the truth predicate can't be true, this shows that the rules for the logical validity predicate can be true, but can't be logically valid.


Beall, J. & J. Murzi [manuscript], “Two Flavors of Curry Paradox”, online at:

Shapiro, L. [2010], "Deflating Logical Consequence", Philosophical Quarterly 60(*).

Whittle, B. [2004], "Dialetheism, Logical Consequence, and Hierarchy", Analysis 64(4): 318–326.

[Edited for readability - rtc]

Okay, so Jeff K. just invited me to join the blog. I will definitely post more substantial stuff in the future, but I thought I would start with something fun. This is a short animated film about logic and philosophy I did a while back. It was posted on the Leiter blog, but just in case you haven't seen it:

Sunday, 24 April 2011

2 Become 1

One occasionally reads that arithmetic doesn't apply "exactly" because, for example, if you physically aggregate two drops of water the result is one drop of water. An example of this generic kind of thought is:
If one adds a litre of water to a reservoir a billion times one would certainly not end up with exactly a billion litres of water, because litres of water cannot be measured that accurately. In scientific experiments measurements of this type (should) always have error bounds, which quantify the degree to which simple arithmetic fails. (E.B. Davies, 2005: "Some Remarks on the Foundations of Quantum Theory", Brit. J. Phil. Sci. 56, p. 530.)
From the true premise that "adding 1 litre" a billion times does not yield a quantity of water of one billion litres, the conclusion is drawn that "simple arithmetic fails". How is this inference justified?

There are two mistakes here concerning how arithmetic is applied in ordinary reasoning. First, about what the bearers of cardinalities are. As Frege, Russell and others pointed out, cardinalities are borne by sets (or classes, or concepts), not concrete lumps: for what would the cardinality of, say, Karl Kautsky be? Second, about the meaning of the addition symbol $``+"$ in such applications. Write $``n = c(X)"$ to mean "$n$ is the cardinality of the set $X$". Then addition, $``n + k = p"$, is defined as:
$\exists X, Y, Z(n = c(X) \wedge k = c(Y) \wedge p = c(Z) \wedge (X \cap Y = \emptyset) \wedge (Z = X \cup Y))$
So, addition $+$ of cardinal numbers "represents" set-theoretic union $\cup$ of disjoint sets, and not physical juxtaposition (or aggregation) of concrete lumps.
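The point can be made concrete in a few lines of Python; the particular sets are of course just illustrations:

```python
# Cardinal addition as disjoint union: n + k = c(X u Y), where X, Y are
# disjoint, c(X) = n and c(Y) = k.
def c(X):
    return len(X)

X = {"Washington", "Adams", "Jefferson"}
Y = {"Frege", "Hilbert"}
assert X & Y == set()                 # disjointness is required
assert c(X) + c(Y) == c(X | Y)        # 3 + 2 = 5

# Aggregating water drops is not disjoint union of sets: fusing two
# drops yields one drop, but the *set* of the two drops still has
# cardinality 2 -- arithmetic tracks the set, not the lump.
drops = {"drop_a", "drop_b"}
assert c(drops) == 2
```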

[There is a different notion of addition defined for ordinals, and this does involve "concatenation" of sequences: that's how a Turing machine adds numbers. But for finite cardinals and ordinals, the addition structures are isomorphic.]

So, in Davies's argument above that "simple arithmetic fails", there was an implicit assumption that adding one litre somehow corresponds to the successor operation $S$ on $\mathbb{N}$. But this is not the case. For the successor operation $S$, if $X$ is a finite set, then adding a distinct element $e$ to $X$ yields a set of cardinality $S(c(X))$. But this is quite different from aggregating or fusing a new concrete thing with another.

Of course, two concrete things can be fused to form one concrete thing. But the union of two sets is likewise one thing, and there is no puzzle here: this is just functional application. The value $f(A,B)$ is one thing, by definition, despite the operation taking two arguments.

On this topic, a Spice Girls video: "2 Become 1",

[Update, 25th April: I edited the post, with an example.]

Saturday, 23 April 2011

If Six Was Nine

"If Six Was Nine" is the name of a Jimi Hendrix song from Axis: Bold as Love (1967). How could six have been nine? Hendrix's title plays on the symmetry of the Arabic numerals, "$6$" and "$9$": each is obtained by rotation of the other through 180 degrees. But the possibility of converting a representation $r$ to a representation $r^{\circ}$ doesn't automatically correspond to some important relation between what they refer to. That's a use/mention confusion.

Following Frege and Russell, (finite) cardinal numbers are the cardinalities of (finite) sets, and cardinalities are obtained by abstraction over the equivalence relation (on sets) of equinumerousness: i.e., there is a bijection $f : A \rightarrow B$. Writing $A \sim B$ to mean this, the guiding axiom is Hume's Principle: $card(A) = card(B) \leftrightarrow A \sim B$. So, for example, $0$ is defined as $card(\emptyset)$.

Suppose that $A = \{a_1, a_2, a_3, a_4, a_5, a_6\}$, with $a_i \neq a_j$ for $i \neq j$, and $B = \{b_1, b_2, b_3, b_4, b_5, b_6, b_7, b_8, b_9\}$, with $b_i \neq b_j$ for $i \neq j$. So $card(A) = 6$ and $card(B) = 9$. But there is no injection $f : B \rightarrow A$, and hence no bijection between $A$ and $B$. So, $card(A) \neq card(B)$ and therefore six isn't nine.
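For finite sets this can be checked mechanically. In the Python sketch below, an injection from $B$ into $A$ would correspond to a sequence of nine *distinct* elements of $A$, and `itertools.permutations` confirms there are none:

```python
from itertools import permutations

A = [f"a{i}" for i in range(1, 7)]   # 6 distinct elements
B = [f"b{i}" for i in range(1, 10)]  # 9 distinct elements

# An injection from B into A would pick len(B) distinct values in A,
# i.e. a 9-permutation of A's 6 elements.  There are none:
assert list(permutations(A, len(B))) == []

# Hence no bijection either, so card(A) != card(B): six isn't nine.
# For finite sets, equinumerousness reduces to equality of size:
assert (len(A) == len(B)) is False
```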

But could six have been nine, even though it actually isn't? I don't think so, because pure mathematical objects are modally invariant. Unlike "concreta", they don't change their properties from world to world. Concreta have "counterparts". Though Quine-in-the-actual-world $w^{\ast}$ was a logician, for some other world $w$, Quine-in-$w$ was not a logician. Quine-in-$w^{\ast}$ and Quine-in-$w$ are mutual counterparts. But abstract mathematical entities like six and nine are just what they are, and couldn't have been different. Concrete worlds are like planets embedded in a fixed background mathematical universe: mathematics is the spacetime of modality.

There are ways, however, of making the linguistic representation "$6 = 9$" true, if we change the interpretation of the symbols. Suppose we have the ring $\mathbb{Z}_3$ of integers modulo three. The ring $\mathbb{Z}_n$ of integers modulo $n$ involves treating integers that differ by a multiple of $n$ as equivalent. We write $p \equiv k \text{ (mod } n)$ as short for $\exists a(p = k + a \times n)$. So, for example, $1 \equiv 4 \text{ (mod } 3)$. Then, if we define terms of the language so that, roughly, the term "$n$" is "$0 + 1 + ... + 1$", with $n$ occurrences of "$+1$" (and similarly for "$-n$"), then the terms "$6$" and "$9$" both refer to $0$ in $\mathbb{Z}_3$. In that sense, "$6 = 9$" is true in the structure $\mathbb{Z}_3$.
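The congruence can be spelled out in a couple of lines of Python:

```python
# In Z_3, integers differing by a multiple of 3 are identified, so the
# terms "6" and "9" name the same residue class.
n = 3
assert 6 % n == 9 % n == 0            # both reduce to 0 in Z_3

# p = k (mod n) iff there is an a with p = k + a*n:
def congruent(p, k, n):
    return (p - k) % n == 0

assert congruent(6, 9, 3)
assert not congruent(6, 9, 5)         # in Z_5, "6" and "9" differ
```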

Still, the truth of "$6 = 9$" in $\mathbb{Z}_3$ isn't what is meant by wondering whether 6 could have been 9 (or 6 might be 9, even though we don't know). That question concerns whether the finite cardinal numbers 6 and 9 could have been identical, and the answer to that is no.

Unfortunately, there isn't a Youtube video of Jimi Hendrix's "If Six Was Nine", but there is an Eddie van Halen version,

Friday, 22 April 2011

Synthese: the editors' response

It's here. Unflattering reactions here and here.

(Edit: Well, some are in fact quite supportive; see Larry Laudan here.)

Thursday, 21 April 2011

Bye pi?

I don't know how many out there are aware of the controversy about pi being the "right" choice for the circle mathematical constant. If you don't know about it yet, the nice video below by Vi Hart will update you with some fun. Here is some more thorough stuff - as far as it goes, anyway. It's called "the Tau manifesto", Tau being, well, two times pi. And, unlike this, the Tau thing is meant to be for real.

Tuesday, 19 April 2011

The Synthese boycott affair

Brian Leiter has just launched a boycott of Synthese motivated by the editorial policy of the journal concerning a special issue on Evolution and Its Rivals. The proposal deserves scrutiny, I think. Leiter argues his case extensively. Have a look. Then any comment is more than welcome, of course.

Thursday, 14 April 2011

Cut Elimination and Contraction

Working on the consistency of contraction-free logics with a naive truth predicate (see earlier post), I've been looking at a paper by Uwe Petersen (2000), 'Logic Without Contraction as Based on Inclusion and Unrestricted Abstraction'. The paper proves the consistency of a set theory over BCK (see Ono & Komori 1985 for details of the system) with unrestricted comprehension. (As was already proved in Grishin (1982) and White (1987), set theory with unrestricted comprehension and the axiom of extensionality, however, is inconsistent.) The hope is that such a consistency result using proof-theoretic techniques can be carried over to a formal theory of (naive) truth. For a typed truth predicate, we already have Volker Halbach (1999), which gives a cut elimination theorem to show consistency for a classical theory over PA (details can also be found in his new book).

The critical aspect of making cut elimination work in a system with naive truth is that there is, unlike in Halbach's system of typed truth, no guarantee that the relevant reduction steps will decrease the complexity of the cut formula. This is simply because with naive truth, the T-right rule can be applied to a formula of any complexity, including, say, the Liar sentence L. (For a discussion of this point, see Kremer 1988.) However, I came across the following promising observation by Petersen:

This is why the strategy employed in cut elimination of shifting cuts 'upwards' cannot rely on a decrease of the length of the cut formula. There is, however, no need for a decrease of the length of the cut formula, if contraction is not available; an induction on the length of the number of logical inferences is sufficient. (Petersen 2000: 370)

[I]n the absence of contraction, normalization and cut elimination can be proved without a recourse to the length of the formula in question (maximum or cut formula). It is this that makes logic without contraction so safe against all antinomies arising from abstraction. (ibid. 374)
In Petersen's BCK system, the upshot is that the cut elimination theorem is proved with no reference to the complexity of cut formulae. It is sufficient that pushing cuts upwards will decrease the length of the derivation, even if the cut formula increases in complexity. If contraction is around, this does not work. The reason, I think, is something like this: if the cut formula A on the succedent side is the result of an application of contraction, pushing the cut will involve cutting on two copies of A, and thus increasing the length of the derivation by including the corresponding subderivation of A on the antecedent side twice.

Petersen's observation is pretty neat, but I'm still not sure how general it is. Contraction is tricky, because even if the system has no explicit contraction rule, it might have an admissible contraction rule (as in the G3c system of Troelstra & Schwichtenberg). I'll have to defer to the experts here: Where should I look for a systematic development of this point? Is this observation common in the substructural literature?

(Cross-posted on The Hidden Abacus.)

Tuesday, 12 April 2011

"There is a set of Fs" implies only logical truths

Nominalists and anti-nominalists disagree about whether there are, for example, sets of things, where the things in question may be "concrete". The sentence "there is a set of chairs" is an example of a mixed mathematical claim. It uses the (presumably) non-mathematical predicate "x is a chair" as the defining formula $\varphi(x)$ in an instance of the Comprehension Scheme:
  • $\exists y \forall x(x\in y \leftrightarrow \varphi(x))$
The sentence "there is a set of chairs" cannot be true if there are no sets. So, a nominalist will conclude that instances of comprehension are not true. Even so, the nominalist, like everyone else, will want to employ such sentences in their reasoning. How is this justified? Hartry Field (1980: Science Without Numbers) argued that the justification of the use of such (believed-to-be-untrue) sentences in reasoning involves their conservativeness. Unrestricted comprehension is, of course, inconsistent. For replacing $\varphi(x)$ with $x \notin x$ yields:
  • $\exists y \forall x(x\in y \leftrightarrow x \notin x)$
which is inconsistent. (Russell's Paradox.) However, there are restricted classes of instances of comprehension which are consistent. In particular, the class of instances of comprehension where $\varphi(x)$ contains only non-mathematical vocabulary is consistent. Suppose that we call sentences containing only "concrete" predicates nom-sentences; call instances of the Comprehension Scheme using only nom-formulas comp-axioms. The simplest relevant conservativeness result says:
  • If there is a derivation of a nom-sentence $B$ from a comp-axiom, there is also a derivation of $B$ from logic.
Unfortunately, the literature on this topic is quite difficult, and there are a number of different results and a number of different kinds of proofs.

Here is the simplest proof of the simplest kind of conservativeness result which will give a flavour of why comprehension axioms are conservative. We consider the simplest scenario: the nom-language $\mathcal{L}$ is a first-order language (with identity) and has only a single unary concrete predicate $Fx$ and we consider the simplest comp-axiom, $\exists y \forall x(Rxy \leftrightarrow Fx)$, where we write $Rxy$ to mean "x is an element of y". The conservativeness result now is: for any $\mathcal{L}$-sentence $B$,
  • $ \text{If } \exists y \forall x(Rxy \leftrightarrow Fx) \vdash B \text{ then } \vdash B$.
Here is a proof which shows how to convert a derivation of $B$ from $\exists y \forall x(Rxy \leftrightarrow Fx)$ to a derivation of $B$ entirely in logic. I will assume a Hilbert-style deductive system with linear derivations, some bunch of axiom schemes and the single rule Modus Ponens.

First, note that, in general, if $\exists y \varphi(y) \vdash B$, then $\varphi(y/c) \vdash B$ (where $c$ is a new constant: a Skolem constant). So, it will be sufficient to show how to obtain a derivation of $B$ in logic alone, given a derivation $(P_0, P_1, ..., P_n)$ of $B$ from $\forall x(Rxc \leftrightarrow Fx)$. Let $\mathcal{L}(c)$ be the result of extending $\mathcal{L}$ with the new constant $c$. The main idea of the proof is that the assumption $\forall x(Rxc \leftrightarrow Fx)$ looks just like a "definition" of $Rxc$. So, the plan is to consider each formula $P_i$ in the derivation, and replace any occurrence of $Rt_1t_2$ in $P_i$ by $Ft_1 \wedge t_2 = c$ (where $t_1, t_2$ are terms). This replacement therefore eliminates the symbol $R$ (i.e., the membership predicate). Let $(P_i)^{\circ}$ be the result of making this replacement.

From the definition of "derivation", each $P_i$ is either an axiom of logic, or is the assumption formula $\forall x(Rxc \leftrightarrow Fx)$, or is obtained by Modus Ponens on previous formulas. The hope is that, after the replacements, the new sequence of formulas $((P_0)^{\circ}, ..., (P_n)^{\circ})$ is, "more or less", a derivation of B in logic.

Since $B$ does not contain the symbol $R$, the replacement makes no difference to $B$. (I.e., $B^{\circ}$ is just $B$.) Next, if $P_i$ is a logical axiom containing $R$, then replacing $Rt_1t_2$ by $Ft_1 \wedge t_2 = c$ will give a logical axiom in $\mathcal{L}(c)$. Next, if $P_i$ is the assumption formula $\forall x(Rxc \leftrightarrow Fx)$, then replacing $Rxc$ by $Fx \wedge c = c$ yields $\forall x((Fx \wedge c = c) \leftrightarrow Fx)$. But this is itself a logically derivable $\mathcal{L}(c)$-sentence. Finally, if $P_i$ is obtained by Modus Ponens from $P_j$ and $P_k$ (with $j, k < i$), we need to check that, after we've made the replacements, the result is still an instance of Modus Ponens. The only thing to check is that applying the replacement to a conditional $P_j \rightarrow P_i$ yields the conditional of the replacements (i.e., that $(P_j \rightarrow P_i)^{\circ}$ is $(P_j)^{\circ} \rightarrow (P_i)^{\circ}$); and this is so. It follows then that, assuming we "paste in" the missing derivation of $\forall x((Fx \wedge c = c) \leftrightarrow Fx)$, the replacement yields a derivation of $B$ in logic alone, in the language $\mathcal{L}(c)$. However, since $B$ does not contain $c$, if there is a logical derivation of $B$ in $\mathcal{L}(c)$, then there is a logical derivation of $B$ in $\mathcal{L}$ itself. So, $\vdash B$, as required.
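The replacement at the heart of the proof is purely syntactic, and can be sketched in Python on formulas represented as nested tuples (the tuple encoding and the connective names are my own illustrative choices):

```python
# Replace every atom ('R', t1, t2) by ('and', ('F', t1), ('=', t2, 'c')),
# recursively, throughout a formula.
def eliminate_R(formula):
    if isinstance(formula, tuple):
        if formula[0] == 'R':
            _, t1, t2 = formula
            return ('and', ('F', t1), ('=', t2, 'c'))
        return (formula[0],) + tuple(eliminate_R(part) for part in formula[1:])
    return formula

# The comp-axiom  forall x(Rxc <-> Fx)  becomes  forall x((Fx & c=c) <-> Fx):
axiom = ('forall', 'x', ('iff', ('R', 'x', 'c'), ('F', 'x')))
assert eliminate_R(axiom) == \
    ('forall', 'x', ('iff', ('and', ('F', 'x'), ('=', 'c', 'c')), ('F', 'x')))

# The replacement commutes with the conditional, as the proof requires:
P_j, P_i = ('R', 'x', 'c'), ('F', 'x')
assert eliminate_R(('imp', P_j, P_i)) == \
    ('imp', eliminate_R(P_j), eliminate_R(P_i))
```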

This is probably the simplest case of a Field-style conservativeness result for mathematical axioms over "nominalistic" sentences. One can then build up to more complicated cases by modifying this kind of proof. E.g., to consider comp-axioms whose defining formula $\varphi(x)$ is any $\mathcal{L}$-formula (rather than just the atomic formula $Fx$). Also, to consider a nom-language $\mathcal{L}$ with various primitive predicates for concreta. In these latter cases, the comp-axioms have the form $\forall x(Rxc \leftrightarrow \varphi(x))$, where $\varphi(x)$ is an $\mathcal{L}$-formula and $c$ is a constant (a new constant is needed for each formula $\varphi(x)$). The method is, again, to replace occurrences of the atomic formula $Rt_1t_2$ as they appear in a derivation by certain $\mathcal{L}(c_1, ..., c_n)$-formulas. This will transform a given derivation using comp-axioms into a derivation in logic alone (in the language $\mathcal{L}(c_1, ..., c_n)$).

"How to write proofs: a quick guide"

In introductory logic, students are asked to answer problems like,
  • Show that the formula $P \rightarrow (P \rightarrow Q)$ is equivalent to $P \rightarrow Q$.
So, the student writes down a truth table with sentence letters $P$ and $Q$, and a column $P \rightarrow (P \rightarrow Q)$ and a column for $P \rightarrow Q$ and checks that the truth values of these two columns all match. Alternatively, a student might be asked to give a formal derivation of $P \rightarrow Q$ from $P \rightarrow (P \rightarrow Q)$ and vice versa.

In intermediate logic, students are asked to answer problems like
  • Suppose $S_0$ is $P \rightarrow Q$ and $S_{n+1}$ is $P \rightarrow S_n$. Show that, for all $n$, $S_n$ is equivalent to $P \rightarrow Q$.
This involves something like a genuine mathematical proof, using induction. When philosophy students step up from introductory logic to intermediate logic, they often find it challenging to come up with informal mathematical proofs of such claims. For philosophy students who do not intend to focus on theoretical philosophy, this needn't matter (though I believe that, increasingly, it will). But for advanced philosophy students who want to focus on topics in logic and parts of metaphysics, philosophy of language, mathematics and science, at some point it becomes necessary to be able to understand, and write out, informal proofs of a mathematical nature.
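For small $n$ the claim can be checked exhaustively by truth tables. A Python sketch (this is of course no substitute for the inductive proof the exercise asks for):

```python
from itertools import product

def imp(a, b):
    return (not a) or b

def S(n, p, q):
    """Evaluate S_n at (p, q): S_0 = P -> Q;  S_{n+1} = P -> S_n."""
    value = imp(p, q)
    for _ in range(n):
        value = imp(p, value)
    return value

# Check, by truth table, that S_n is equivalent to P -> Q for small n:
for n in range(10):
    for p, q in product([True, False], repeat=2):
        assert S(n, p, q) == imp(p, q)
```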

Here is a link to a short guide on writing proofs, for mathematics students, by Eugenia Cheng, a category theorist at The University of Sheffield.

Sunday, 10 April 2011

Arithmetic and Epistemology

This is "Can't Be Sure" by The Sundays, from their Reading, Writing and Arithmetic (1990). It was number 1 in John Peel's Festive 50 in 1989.

Saturday, 9 April 2011

Yablo's Paradox

A topic that Hannes Leitgeb, Roy Cook and I have written about is Yablo's paradox, which has an interesting subliterature associated with it: roughly 2 or 3 articles per year.

Instead of the usual liar paradox (a single sentence saying of itself that it is untrue), one can obtain semantic paradoxes by introducing (finite or even infinite) "loops". Yablo's idea (see Yablo 1993) is this: what if one replaces the loop by an infinite list? Yablo's paradox concerns a denumerable set of sentences $\{Y_0, Y_1, ...\}$, such that each $Y_n$ is equivalent to "for all $k > n$, $Y_k$ is not true". It's easy to see that one cannot assign truth values consistently to the $Y_n$. (As Roy mentions in the comments below, a related idea is mooted in Kripke 1975).
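The argument that no consistent assignment exists is short enough to write out. A sketch (with $v$ a putative classical truth-value assignment respecting the equivalences):

```latex
\textbf{Claim.} No assignment $v$ of classical truth values to
$\{Y_0, Y_1, \dots\}$ respects all the equivalences
``$Y_n$ iff for all $k > n$, $Y_k$ is not true''.

\textbf{Sketch.} Suppose $v(Y_n) = \top$ for some $n$. Then
$v(Y_k) = \bot$ for every $k > n$; in particular
$v(Y_{n+1}) = \bot$. But also $v(Y_k) = \bot$ for every $k > n+1$,
which is just what $Y_{n+1}$ says, so $v(Y_{n+1}) = \top$:
contradiction. Hence every $Y_n$ is false under $v$. But then every
$Y_k$ with $k > 0$ is false, which is just what $Y_0$ says, so
$v(Y_0) = \top$: contradiction. $\Box$
```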

Two major issues arise in connection with Yablo's paradox: the question of self-reference and the phenomenon of $\omega$-inconsistency.

(A) Self-Referentiality
Is this semantic paradox self-referential? Some say "no" (Yablo); some say "yes" (Priest); some say, "it depends". The argument for "no" is that $Y_n$ is, roughly, equivalent to $\neg Y_{n+1} \wedge \neg Y_{n+2} \wedge ...$. So, the truth value of $Y_n$ doesn't "depend" on itself. Rather it "depends" on the truth values of $Y_{n+1}, Y_{n+2}$, etc. But what does "depends" mean? There is a sense in which this can be made more precise (see Leitgeb 2005), and one can formally show that each $Y_n$ is not self-referential. On the other hand, the construction of the sentences $Y_n$ requires a uniform fixed point result, saying that the predicate $Y(x)$ is equivalent (for variable $x$) to "for all $y > x$, $Y(y)$ is not true". This makes it look like the predicate $Y(x)$ is self-referential, although its instances aren't. (See Cook 2006 for more on this sort of thing.)

(B) $\omega$-Inconsistency
The paradox leads to an interesting form of $\omega$-inconsistency (first noted, I believe, by Hardy 1995), which itself is related to the non-wellfoundedness of the dependence relation (see also Forster 1996). One can reconstruct the paradox using the language of arithmetic with a primitive truth predicate $T(x)$ added. First, define a fixed-point predicate $Y(x)$ by uniform diagonalization so that $PA$ proves (with a little use/mention abuse):
  • $\forall x [Y(x) \leftrightarrow \forall y > x \, \neg T(\ulcorner Y(\dot{y}) \urcorner)]$
Define the $n$-th Yablo sentence $Y_n$ to be $Y(x/\underline{n})$. Then add local disquotational T-sentences for each arithmetic sentence:
  • $T(\ulcorner A \urcorner) \leftrightarrow A$ (where $A$ is an arithmetic sentence)
And add the local disquotational scheme for each Yablo sentence:
  • $T(\ulcorner Y_n \urcorner) \leftrightarrow Y_n$
Call the resulting theory $PA_Y$. Compactness tells us that $PA_Y$ is consistent. Furthermore, one can prove that $PA_Y$ is an $\omega$-inconsistent conservative extension of PA. This means that $PA_Y$ has no standard model: the natural number structure $\mathcal{N}$ cannot be expanded to a model $(\mathcal{N}, E) \models PA_Y$. Still, each non-standard $\mathcal{M} \models PA$ can be expanded to a model $(\mathcal{M}, E) \models PA_Y$, by choosing $E$ (the denotation of the truth predicate) carefully. However, the $\omega$-inconsistency is "localized" entirely within the part of the language containing the truth predicate, for every arithmetic theorem of $PA_Y$ is true in $\mathcal{N}$. This relates Yablo's paradox to the phenomenon of truth theories with no standard model (see Leitgeb 2001, Ketland 2004, 2005, Barrio 2010, Picollo 2011).
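The $\omega$-inconsistency argument runs roughly as follows (a sketch only; see Ketland 2005 for details):

```latex
\textbf{Sketch.} Arguing inside $PA_Y$: suppose $Y_n$. By the
fixed-point equivalence, $\neg T(\ulcorner Y(\dot{y}) \urcorner)$ for
all $y > n$; in particular $\neg T(\ulcorner Y_{n+1} \urcorner)$. But
also $\neg T(\ulcorner Y(\dot{y}) \urcorner)$ for all $y > n+1$, which
yields $Y_{n+1}$, and hence $T(\ulcorner Y_{n+1} \urcorner)$ by the
local disquotation scheme: contradiction. So $PA_Y \vdash \neg Y_n$,
and hence $PA_Y \vdash \neg T(\ulcorner Y_n \urcorner)$, for each $n$.
On the other hand, from $\neg Y_0$ the fixed-point equivalence yields
$PA_Y \vdash \exists y\, T(\ulcorner Y(\dot{y}) \urcorner)$. So $PA_Y$
proves an existential claim while refuting each of its numerical
instances: $\omega$-inconsistency.
```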

Here is a temporally-ordered bibliography (which I'll update) with links:

[0] Kripke, S. 1975: "Outline of a Theory of Truth", Journal of Philosophy 72.
[1] Yablo, S. 1985: "Truth and reflection". Journal of Philosophical Logic 14.
[2] Yablo, S. 1993: "Paradox without self-reference". Analysis 53.
[3] Goldstein, L. 1994: "A Yabloesque paradox in set theory". Analysis 54.
[4] Hardy, J. 1995: "Is Yablo's paradox liar-like?". Analysis 55.
[5] Tennant, N. 1995: "On paradox without self-reference". Analysis 55.
[6] Forster, T. 1996: "The significance of Yablo's paradox without self-reference". (Unpublished: PS).
[7] Priest, G. 1997: "Yablo's paradox without self-reference". Analysis 57. (PDF)
[8] Sorensen, R. 1998: "Yablo's paradox and kindred infinite liars". Mind 107.
[9] Beall, J.C. 1999: "Completing Sorensen's menu: a non-modal Yabloesque Curry". Mind 108.
[10] Beall, J.C. 2001: "Is Yablo's paradox non-circular?". Analysis 61.
[11] Leitgeb, H. 2001: "Theories of truth which have no standard models". Studia Logica 68.
[12] Leitgeb, H. 2002: "What is a self-referential sentence? Critical remarks on the alleged (non)-circularity of Yablo's paradox". Logique et Analyse 177-8 (no online link).
[13] Bueno, O. & Colyvan, M. 2003: "Yablo's paradox and referring to infinite objects". Australasian Journal of Philosophy 81. (PDF)
[14] Bringsjord, S & van Heuveln, B. 2003: "The 'mental-eye' defence of an infinitized version of Yablo's paradox". Analysis 63.
[15] Bueno, O. & Colyvan, M. 2003: "Paradox without satisfaction". Analysis 63.
[16] Ketland, J. 2004: "Bueno and Colyvan on Yablo's paradox". Analysis 64.
[17] Cook, R.T. 2004: "Patterns of paradox". Journal of Symbolic Logic 69.
[18] Yablo, S. 2004: "Circularity and self-reference". In T. Bolander, V. Hendricks & S. Petersen (eds.) 2004, Self-Reference. (PDF)
[19] Uzquiano, G. 2004: "An infinitary paradox of denotation". Analysis 64.
[20] Ketland, J. 2005: "Yablo's paradox and $\omega$-inconsistency". Synthese 145.
[21] Leitgeb, H. 2005: "What truth depends on". Journal of Philosophical Logic 34.
[22] Leitgeb, H. 2005: "Paradox by (non-wellfounded) definition". Analysis 65.
[23] Shackel, N. 2005: "The Form of the Benardete Dichotomy". British Journal for the Philosophy of Science 56.
[24] Goldstein, L. 2006: "Fibonacci, Yablo, and the cassationist approach to paradox". Mind 115.
[25] Cook, R.T. 2006: "There are non-circular paradoxes (but Yablo's isn't one of them!)". The Monist 89.
[26] Schlenker, P. 2007: "The elimination of self-reference: generalized Yablo-series and the theory of truth". Journal of Philosophical Logic 36.
[27] Schlenker, P. 2007: "How to eliminate self-reference: a precis". Synthese 158.
[28] Bolander, T. 2008: "Self-Reference". SEP.
[29] Landini, G. 2008: "Yablo's paradox and Russellian propositions". Russell: Journal of Bertrand Russell Studies 28.
[30] Bernardi, C. 2009: "A topological approach to Yablo's paradox". Notre Dame Journal of Formal Logic 50.
[31] Luna, L. 2009: "Yablo's paradox and beginningless time". Disputatio 3.
[32] Cook, R.T. 2009: "Curry, Yablo and Duality". Analysis 69.
[33] Urbaniak, R. 2009: "Leitgeb, "About," Yablo". Logique et Analyse 207. (PDF).
[34] Barrio, E. 2010: "Theories of truth without standard models and Yablo's sequences". Studia Logica 96.
[35] Picollo, L. 2011: "La paradojicidad de la paradoja de Yablo". University of Buenos Aires, Undergraduate dissertation (PDF in Spanish).

[to be updated!]

Monday, 4 April 2011

Good points (and important speakers)

Good points is the title of a conference to be held next week in Milan (April 11-12) in honor of Paolo Casalegno, a logician and philosopher of language who passed away in 2009. Invited speakers include Diego Marconi, Paul Boghossian, Timothy Williamson, Alex Orenstein, Igor Douven, and Crispin Wright (see the detailed program below). Notably, it will be possible to watch the conference via live streaming.

From the Obituary of Paolo Casalegno (D. Marconi, Dialectica, 2009, 63: 115-116):
"Paolo Casalegno was the finest Italian analytic philosopher. However, this is not the main reason we’ll miss him. We won’t just miss the clarity of his mind and his outstanding philosophical intelligence; we will miss his cooperative attitude, his friendliness, his ability to listen to the philosophies of others with an open mind, and perhaps most of all his often hidden but entirely genuine philosophical passion."

Good points: Paolo Casalegno's criticism of some analytic philosophers

University of Milan, Palazzo Greppi (via Sant'Antonio 12), Sala Napoleonica.

11 April 2011

15:00-16:30    Diego Marconi (Turin)
               Competence and normativity

17:00-19:00    Paul Boghossian (NYU)
               Reasoning and meaning
               with a reply by Timothy Williamson (Oxford)

12 April 2011

9:30-11:00     Alex Orenstein (CUNY)
               Inscrutability scrutinized

11:30-13:00    Igor Douven (Groningen)
               Lotteries, assertion, and the pragmatics of belief

15:00-17:00    Crispin Wright (Aberdeen-NYU)
               The problem of non-conclusiveness
               with a reply by Timothy Williamson (Oxford)

(From the conference website.)