## Saturday, 31 March 2012

### Proving Things

Philosophy students working in areas that require technicalities often tell me that while they understand the various definitions and the general gist of some overall result or bunch of results (e.g., the Lindenbaum Lemma, Overspill, etc.), they're no good at proving stuff. Part of what plays a role, I think, is that proving things simply isn't straightforward, and one can easily screw up a proof -- which is a bit embarrassing.
At a certain level, no one is any good at proving stuff unless they practice over and over and over again! Even the brilliant mathematics student reaches a point (often in mid-UG studies) where the weird innate talent that helped them sail through school runs dry, and they have to learn to think carefully and practice proof techniques they pick up from textbooks and teachers.
The post below about diagonalization has, tucked just beneath the surface, examples of things that aren't too hard to prove, and are useful for getting some practice at proofs by induction. For example,
Because of the presence of $\underline{0}$ and $S$ in $\mathcal{L}$, there is a canonical way of denoting numbers, using canonical numerals. Roughly, the number $n$ is denoted by the term $S\dots S\underline{0}$, with $S$ repeated $n$ times. More exactly, one defines a function
$\underline{ }:\mathbb{N} \rightarrow Tm(\mathcal{L})$,
by primitive recursion as follows:
$\underline{0}:= \underline{0}$
$\underline{n+1}:=S\underline{n}$
So, for example, the number 7 is denoted by $\underline{7}$, which is the term $SSSSSSS\underline{0}$. By induction, it is easy to prove that $(\underline{n})^{\mathbb{N}}=n$.
So, how do we prove this by induction?
Induction is used to prove claims of the form,
for all $n$, $n$ has property P,
by two preliminary steps:
(i) Base step: show that $0$ has P.
(ii) Induction step: show that if $k$ has P, then $k+1$ has P too (where $k$ is arbitrary).
It's always useful to identify the property in question. In this case,
$n$ has the property P iff $(\underline{n})^{\mathbb{N}}=n$
So, the two steps required are:
(i) Base step: show that $(\underline{0})^{\mathbb{N}}=0$.
(ii) Induction step: show that if $(\underline{k})^{\mathbb{N}}=k$, then $(\underline{k+1})^{\mathbb{N}}=k+1$.
The Base step is completely obvious, because $(\underline{0})^{\mathbb{N}}$ is defined to be the number $0$.
The induction step is fairly straightforward. We want to show that,
if $(\underline{k})^{\mathbb{N}}=k$, then $(\underline{k+1})^{\mathbb{N}}=k+1$.
So, first assume the Induction Hypothesis:
$(\underline{k})^{\mathbb{N}}=k$
The sub-aim is to prove,
$(\underline{k+1})^{\mathbb{N}}=k+1$
from the Induction Hypothesis. And this proof should involve the definition of the function $\underline{ }$ that takes numbers to their numerals, and perhaps further "background facts". So, one can do it like this,
$(\underline{k+1})^{\mathbb{N}} = (S\underline{k})^{\mathbb{N}}$
$= S^{\mathbb{N}}((\underline{k})^{\mathbb{N}})$
$= S^{\mathbb{N}}(k)$
$= k+1$, as required.
In the first line, we replace "$\underline{k+1}$" by its definition, namely "$S\underline{k}$". In the second line, we use the usual clause for evaluating the denotation of a compound term $ft$ relative to an interpretation $\mathcal{A}$, namely,
$(ft)^{\mathcal{A}} = f^{\mathcal{A}}((t)^{\mathcal{A}})$.
In the third line, we use the Induction Hypothesis; and in the final line, we use the fact that $S^{\mathbb{N}}$ is the successor function.
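The induction above can also be checked mechanically. Here is a minimal Python sketch (the names `numeral` and `denote` are mine, not part of the post), representing the numeral for $n$ as the string of $n$ occurrences of "S" followed by "0":

```python
def numeral(n):
    """Return the canonical numeral for n: 'S' repeated n times, then '0'."""
    return "S" * n + "0"

def denote(term):
    """Evaluate a numeral term in the standard model N:
    '0' denotes 0, and a term 'S t' denotes the successor of what t denotes."""
    if term == "0":
        return 0
    assert term.startswith("S")
    return denote(term[1:]) + 1   # S^N is the successor function

# The claim proved by induction above: for every n, the numeral for n denotes n.
for n in range(50):
    assert denote(numeral(n)) == n

print(numeral(7))  # SSSSSSS0
```

Of course, the `assert` loop only checks finitely many instances; it is the induction that covers all of them.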

## Friday, 30 March 2012

### Professorship and postdocs at MCMP

One Assistant Professorship in Philosophy at the Munich Center for
Mathematical Philosophy (MCMP)

Two Postdoctoral Fellowships at the Munich Center for Mathematical
Philosophy (MCMP)

(Fluency in German is not mandatory.)
--------------------
Ludwig-Maximilians-University Munich is seeking applications for an

**Assistant Professorship in Mathematical Philosophy**

at the Munich Center for Mathematical Philosophy (directed by Professor
Hannes Leitgeb) at the Faculty for Philosophy, Philosophy of Science and
Study of Religion. The position, which is to start from October 1st 2012,
is for three years with the possibility of extension. Technically, it is a
so-called 'Akademische Ratsstelle auf Zeit' in the Bavarian university
system, which means basically that one has the rights and perks of a civil
servant.

The appointee will be expected (i) to do philosophical research assisted
by logical or mathematical methods, (ii) to teach five hours a week in
areas of philosophy in which logical or mathematical methods are applied,
and (iii) to take on management tasks in the new Munich Center for
Mathematical Philosophy. The successful candidate will have (iv) a PhD in
philosophy or logic, and (v) teaching experience in philosophy or logic.
Fluency in German is not mandatory.

For details on the Munich Center for Mathematical Philosophy, please see here.
The appointment will be made within the German A13 salary scheme (under the
assumption that the civil service requirements are met).

Women are currently underrepresented in the Faculty, therefore we
particularly welcome applications for this post from suitably qualified
female candidates. Furthermore, given equal qualification, severely
physically challenged individuals will be preferred.

Applications (including CV, certificates, list of publications) should be
sent to

Ludwig-Maximilians-Universitaet Muenchen
Fakultaet für Philosophie, Wissenschaftstheorie und Religionswissenschaft
Lehrstuhl für Logik und Sprachphilosophie / MCMP
Geschwister-Scholl-Platz 1
80539 Muenchen
Germany
E-Mail: office.leitgeb@lrz.uni-muenchen.de

by

April 23rd, 2012.

If at all possible, we would much prefer to receive applications by email.

Two confidential letters of reference addressing the applicant’s
qualifications for academic research should be sent to the
same address from the referees directly.

Contact for informal inquiries: office.leitgeb@lrz.uni-muenchen.de
--------------------
Two postdoctoral fellowships are being advertised at the Munich Center for
Mathematical Philosophy (MCMP). The MCMP, which is devoted to applications
of logical and mathematical methods in philosophy, was established in 2010
at the Ludwig-Maximilians-University Munich (LMU) based on generous
support by the Alexander von Humboldt Foundation. Directed by Professor
Hannes Leitgeb, the Center hosts a vibrant research community of
university faculty, postdoctoral fellows, doctoral fellows, and visiting
fellows. The Center organizes at least two weekly colloquia in
Mathematical Philosophy and a weekly internal work-in-progress seminar,
as well as various other activities such as workshops and conferences.

The successful candidate will partake in all of the Center's academic
activities and enjoy its administrative facilities and financial support.
More information on the Center can be found here. 
Over and above the Center, the Faculty of Philosophy, Philosophy of
Science and Study of Religion at LMU offers a wealth of expertise in all
areas of philosophy.

The successful applicants are supposed to use logical or mathematical
methods in their philosophical research, and/or to reflect on these
methods philosophically. Research projects can, of course, be carried out
in logic or philosophy of mathematics, but they are not restricted to
these fields -- for instance, work in epistemology, philosophy of science,
philosophy of language, metaphysics, moral philosophy, and so forth are
equally encouraged, as long as formal methods play a significant role in
the corresponding research projects.

One of the two postdoctoral fellowships is meant to combine research on a
philosophical topic with research on the following question on academic
management: Which new ways of organizing the field of mathematical
philosophy, and of presenting mathematical philosophy to professional
philosophers, students, and the public, can be developed that might take
mathematical philosophy one step forward? If you intend to apply for just
one of the two fellowships, please indicate this in your application
package.

In general, applicants should be in possession of the skills that would
normally be gained by achieving a PhD or equivalent in philosophy, logic
or a closely related field. The postdoctoral stipends are for three years,
and they should be taken up by October 1st 2012 (although there is some
flexibility on that side). Each stipend will amount to EUR 2400 of monthly
salary (normally tax-free, but excluding insurance). Additionally, the
Center helps its fellows with the costs that arise from attending
conferences (fees, traveling, accommodation).

The official language at the Center is English, and the successful
candidates need not be able to speak German. There is also the
possibility, though no obligation, to do some teaching in either English
or German.

Applications are due by April 23rd, 2012, and should include:
1. A cover letter addressing the applicant’s
background and research interests.
2. A curriculum vitae.
3. A proposal for a research project (3 pages).
4. A sample of written work (e.g., a published article).
5. Two confidential letters of reference addressing the applicant’s
qualifications for postdoctoral research. These should be sent from the
referees directly.

We especially encourage female scholars to apply. The
Ludwig-Maximilians-University Munich in general, and the MCMP in
particular, endeavor to raise the percentage of women among its academic
personnel.

Applications, letters of reference, as well as any questions should be
directed to

Ludwig-Maximilians-Universitaet Muenchen
Fakultaet fuer Philosophie, Wissenschaftstheorie
und Religionswissenschaft
Lehrstuhl fuer Logik und Sprachphilosophie / MCMP
Geschwister-Scholl-Platz 1
D-80539 Muenchen
Germany

E-mail: office.leitgeb@lrz.uni-muenchen.de

If at all possible, we prefer to receive applications and letters of
reference by e-mail (sent to office.leitgeb@lrz.uni-muenchen.de).
***********************************


## Sunday, 25 March 2012

### A tribute to Horacio Arló-Costa in Buenos Aires

Eduardo Barrio, logician at the University of Buenos Aires, draws my attention to the event they are organizing over there to honor the much-missed Horacio Arló-Costa (here is the website of the event, in Spanish, and here is the CFP in English at Choice & Inference). The event will take place from August 2 to 4, 2012, and has as confirmed speakers (others are still listed as TBC):

Verónica Becher (Universidad de Buenos Aires)
John Collins (Columbia University)
Paul Egré (Jean Nicod Institute)
Jeff Helzner (Columbia University)
Rohit Parikh (City University of New York)
Fernando Tohmé (Universidad Nacional del Sur)

There is also a call for papers, focusing on (but not limited to) the themes and topics which were Horacio’s specialties:
We call for contributions in any area of philosophical logic, including (but not limited to) epistemic and modal logic, ampliative reasoning, belief revision, conditional logic, game theory and decision theory, among other topics. We invite submissions for 40-minute presentations. Submissions should take the form of a 1000/1500-word abstract. They should be sent by e-mail in an attached file in pdf format to info@sadaf.org.ar. Authors’ names and affiliation should be given only in the text of the e-mail message.
Let me add that Buenos Aires is truly one of the most charming cities in the world -- and I say this as a Brazilian, despite the classic rivalry between Argentina and Brazil... The combination of a terrific event with such an attractive location should be more than enough reason for many readers to consider submitting a paper!

## Friday, 23 March 2012

### FEW 2012 programme announcement

From Branden Fitelson:
Dear Friends of FEW,

The Program for this year's workshop is now set.  See:

http://fitelson.org/few/schedule.html

As you can see, the program is terrific, and so is the venue!

http://en.wikipedia.org/wiki/Nymphenburg_Palace

If you're interested in attending, please contact the local organizers: 
Ole Hjortland <olethhjortland@gmail.com> and/or
Florian Steinberger <Florian.Steinberger@lmu.de>.
[Note: because of the swanky venue, there will be a limit on the number of attendees this year.]

We hope to see you in Munich!

Best,
Branden

## Wednesday, 21 March 2012

### Pullum S(e)usses Out the Halting Problem

Since 2012 is the Turing Centenary Year, and since incompleteness/diagonalization has been the dominant theme on M-Phi of late, I thought it would be both fun and appropriate to repost Geoff Pullum's delightful Seussification of Turing's proof of the undecidability of the halting problem. (H/t to John Baez over on Google+.)

Scooping the Loop Snooper

No program can say what another will do.
Now, I won’t just assert that, I’ll prove it to you:
I will prove that although you might work til you drop,
You can’t predict whether a program will stop.

Imagine we have a procedure called P
That will snoop in the source code of programs to see
There aren’t infinite loops that go round and around;
And P prints the word “Fine!” if no looping is found.

You feed in your code, and the input it needs,
And then P takes them both and it studies and reads
And computes whether things will all end as they should
(As opposed to going loopy the way that they could).

Well, the truth is that P cannot possibly be,
Because if you wrote it and gave it to me,
I could use it to set up a logical bind
That would shatter your reason and scramble your mind.

Here’s the trick I would use — and it’s simple to do.
I’d define a procedure — we’ll name the thing Q –
That would take any program and call P (of course!)
To tell if it looped, by reading the source;

And if so, Q would simply print “Loop!” and then stop;
But if no, Q would go right back to the top,
And start off again, looping endlessly back,
Til the universe dies and is frozen and black.

And this program called Q wouldn’t stay on the shelf;
I would run it, and (fiendishly) feed it itself.
What behaviour results when I do this with Q?
When it reads its own source, just what will it do?

If P warns of loops, Q will print “Loop!” and quit;
Yet P is supposed to speak truly of it.
So if Q’s going to quit, then P should say, “Fine!” –
Which will make Q go back to its very first line!

No matter what P would have done, Q will scoop it:
Q uses P’s output to make P look stupid.
If P gets things right then it lies in its tooth;
And if it speaks falsely, it’s telling the truth!

I’ve created a paradox, neat as can be –
And simply by using your putative P.
When you assumed P you stepped into a snare;
Your assumptions have led you right into my lair.

So, how to escape from this logical mess?
I don’t have to tell you; I’m sure you can guess.
By reductio, there cannot possibly be
A procedure that acts like the mythical P.

You can never discover mechanical means
For predicting the acts of computing machines.
It’s something that cannot be done. So we users
Must find our own bugs; our computers are losers!

_______

Pullum, Geoffrey K. (2000) “Scooping the loop snooper: An elementary proof of the undecidability of the halting problem.” Mathematics Magazine 73.4 (October 2000), 319-320.

## Sunday, 18 March 2012

### Diagonalization and the Diagonal Lemma

A couple of recent M-Phi posts by Catarina have discussed the topic of diagonalization and the Diagonal Lemma in Gödel's incompleteness results.
Because I can't be bothered to get out of bed, this post consists in some Sunday afternoon rambling about how diagonalization is usually understood and how the Diagonal Lemma is proved (i.e., how to define a fixed-point, or self-referential, sentence).
Gödel's original 1931 paper on incompleteness, "On Formally Undecidable Propositions in Principia Mathematica and Related Systems, I", is brimming with new ideas, of which four important ones are:
(i) the notion of what is now called a primitive recursive function;
(ii) arithmetization of syntax (aka, gödel coding);
(iii) the diagonalization operation on a string;
(iv) given any $\mathcal{L}$-formula $\phi(x)$, the construction of a "fixed-point" or self-referential sentence $\theta$ equivalent to $\phi(\ulcorner \theta \urcorner)$.
There are short explanations of each of these below.

1. The (Usual) First-Order Language of Arithmetic

The language of arithmetic $\mathcal{L}$ is a first-order language with identity, and with signature $\{\underline{0},S,+, \times\}$, and is intended to talk about the structure $\mathbb{N} = (\mathbb{N}, 0, S, +, \times)$. The set of $\mathcal{L}$-terms, $Tm(\mathcal{L})$, is defined inductively, by starting with $\underline{0}$ and all variables, $x_i$, and then closing under applying the function symbols $S, +, \times$. Then, an atomic $\mathcal{L}$-formula is defined to be of the form $t=u$, where $t, u \in Tm(\mathcal{L})$. And the set of $\mathcal{L}$-formulas, $Form(\mathcal{L})$, is defined inductively, by starting with atomic formulas $t = u$, and then closing under applying the unary connective $\neg$, the binary connective $\rightarrow$ and quantifiers $\forall x_i$. (Other connectives, and $\exists$, can be defined.)
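For readers who like to experiment, the inductive definitions of $Tm(\mathcal{L})$ and $Form(\mathcal{L})$ can be sketched as a little abstract syntax in Python (the encoding and constructor names are my own, purely illustrative):

```python
# Terms and formulas of L as nested tuples; the constructors close the
# sets under the symbol-applications described above.

Zero = ("0",)                                  # the constant symbol 0
def Var(i):      return ("x", i)               # the variable x_i
def Suc(t):      return ("S", t)               # successor applied to a term
def Plus(t, u):  return ("+", t, u)
def Times(t, u): return ("*", t, u)

def Eq(t, u):      return ("=", t, u)          # atomic formulas t = u
def Not(a):        return ("not", a)
def Imp(a, b):     return ("->", a, b)
def Forall(i, a):  return ("forall", i, a)     # quantification over x_i

# the formula  forall x_1 (S x_1 = x_1 + S 0), built inductively
phi = Forall(1, Eq(Suc(Var(1)), Plus(Var(1), Suc(Zero))))
print(phi)
```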

It is useful sometimes to abuse notation and conflate the symbol $\underline{0}$ and its denotation, the number $0 \in \mathbb{N}$; or the symbol $S$ and its denotation, the function $S: \mathbb{N} \rightarrow \mathbb{N}$, etc. Context is usually sufficient to disambiguate. If one is considering a non-standard $\mathcal{L}$-interpretation $\mathcal{A}$ (i.e., $\mathcal{A}$ is not isomorphic to $\mathbb{N}$), then $S^{\mathcal{A}}$ will be the denotation of $S$ in $\mathcal{A}$; $+^{\mathcal{A}}$ the denotation of $+$, and so on.

Although taking $\underline{0},S, +$ and $\times$ as undefined primitives is fairly standard, some presentations include the binary relation symbol $x < y$ and a binary function symbol $exp(x,y)$ (denoting the exponential function, $x^y$) as additional primitives. But in fact both can be defined in $\mathbb{N}$: for $x < y$, this is obvious, but for $exp(x,y)$, it is not so obvious.

A bounded quantifier has the form $(\forall x \leq t)$ or $(\exists x \leq t)$, where $t$ is a term, and $x \leq t$ is the formula $\exists z(x + z = t)$. A bounded formula $\phi$ is a formula in which all quantifiers are bounded. A $\Sigma_1$-sentence has the form $\exists x_1 \dots \exists x_k \phi$, where $\phi$ is bounded.

2. Canonical Numerals for Numbers

Because of the presence of $\underline{0}$ and $S$ in $\mathcal{L}$, there is a canonical way of denoting numbers, using canonical numerals. Roughly, the number $n$ is denoted by the term $S...S\underline{0}$, with $S$ repeated $n$ times. More exactly, one defines a function
$\underline{}: \mathbb{N} \rightarrow Tm(\mathcal{L})$,
by primitive recursion as follows:
$\underline{0} := \underline{0}$
$\underline{n+1} := S\underline{n}$
So, for example, the number 7 is denoted by $\underline{7}$, which is the term $SSSSSSS\underline{0}$. By induction, it is easy to prove that $(\underline{n})^{\mathbb{N}} = n$.

3. Robinson Arithmetic: $Q$

A very important weak system of arithmetic is Robinson arithmetic, $Q$. Its axioms are:
$(Q_1) \forall x(Sx \neq \underline{0})$
$(Q_2) \forall x \forall y(Sx = Sy \rightarrow x = y)$
$(Q_3) \forall x(x \neq \underline{0} \rightarrow \exists y(x = Sy))$
$(Q_4) \forall x(x + \underline{0} = x)$
$(Q_5) \forall x \forall y(x + Sy = S(x+y))$
$(Q_6) \forall x(x \times \underline{0} = \underline{0})$
$(Q_7) \forall x \forall y(x \times Sy = x \times y + x)$
It is noteworthy that $Q$ does not have as an axiom scheme the induction scheme,
$(\phi(\underline{0}) \wedge \forall x(\phi(x) \rightarrow \phi(Sx))) \rightarrow \forall x \phi(x)$
Extending $Q$ with the induction scheme gives Peano arithmetic.
The system $Q$ is sound and complete for certain classes of sentences. That is,
$Q \vdash \phi$ if and only if $\phi$ is true.
This holds when $\phi$ is a quantifier-free sentence, a bounded sentence or a $\Sigma_1$-sentence.

In general, $Q$ can correctly perform finite computations using successor, addition and multiplication. However, $Q$ is not very good at algebra. For example, $Q$ does not prove $\forall x \forall y(x + y = y + x)$. Importantly, although $Q$ has axioms for addition and multiplication, it does not have axioms for exponentiation.
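To see what "correctly performing finite computations" amounts to, one can run the recursion equations $(Q_4)$ to $(Q_7)$ as rewrite rules on numerals. A small Python sketch (the function names are mine), with numerals again written as strings of S's followed by 0:

```python
def add(x, y):
    """x + y, unfolding (Q4): x + 0 = x, and (Q5): x + Sy = S(x + y)."""
    if y == "0":
        return x                      # (Q4)
    return "S" + add(x, y[1:])        # (Q5)

def mul(x, y):
    """x * y, unfolding (Q6): x * 0 = 0, and (Q7): x * Sy = x * y + x."""
    if y == "0":
        return "0"                    # (Q6)
    return add(mul(x, y[1:]), x)      # (Q7)

two, three = "SS0", "SSS0"
print(add(two, three))   # SSSSS0, i.e. 2 + 3 = 5
print(mul(two, three))   # SSSSSS0, i.e. 2 * 3 = 6
```

Each run of these functions mirrors a proof in $Q$ of the corresponding numerical identity; what $Q$ cannot do is prove the general algebraic laws behind them.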

4. Gödel Coding

Under gödel coding, each primitive symbol $\alpha$ of the alphabet of $\mathcal{L}$ is given a code number, $c(\alpha)$. Also, each finite sequence $(n_1, \dots, n_k)$ of numbers can be coded as a number, sometimes written as $\langle n_1, \dots, n_k \rangle$. For example, one might use the coding
$\langle n_1, \dots, n_k \rangle := (p_1)^{n_1 + 1}\times \dots \times (p_k)^{n_{k} + 1}$,
where $p_1, p_2, \dots$ are the primes in ascending order, beginning with 2. (And we assign $0$ as the code of the empty string $()$.) For example, $\langle 10, 0, 1 \rangle = 2^{11}\times 3 \times 5^2$. Such numbers are then called sequence numbers.

Because $\mathcal{L}$-strings are sequences of primitive symbols, the code of an $\mathcal{L}$-string $(\alpha_1, \dots, \alpha_k)$ is defined as:
$\#(\alpha_1, \dots, \alpha_k):= \langle c(\alpha_1), \dots, c(\alpha_k) \rangle$.
(One needs to be a bit careful as the code of a primitive symbol, say $\forall$, is not the same as the code of the corresponding one-element string $(\forall)$. For example, if $c(\forall) = 10$, say, then $\#(\forall) = 2^{11}$.)
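This prime-power coding is easy to implement and to invert. A Python sketch (the helper names are my own; the coding above is just one of many workable choices):

```python
def primes(k):
    """First k primes, by trial division (fine for small k)."""
    ps, n = [], 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def seq_code(ns):
    """<n_1,...,n_k> := p_1^(n_1+1) * ... * p_k^(n_k+1); empty sequence -> 0."""
    if not ns:
        return 0
    code = 1
    for p, n in zip(primes(len(ns)), ns):
        code *= p ** (n + 1)
    return code

def seq_decode(code):
    """Recover the sequence from a valid sequence number by reading off
    the exponents of 2, 3, 5, ... in its prime factorization."""
    ns, i = [], 0
    while code > 1:
        p = primes(i + 1)[i]
        e = 0
        while code % p == 0:
            code //= p
            e += 1
        ns.append(e - 1)
        i += 1
    return ns

print(seq_code([10, 0, 1]))   # 2^11 * 3 * 5^2 = 153600
```

The "+1" in the exponents is what makes decoding unambiguous: a coded entry of $0$ still leaves a visible factor of the corresponding prime.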

Because each $\mathcal{L}$-string $\phi$ now has a code, say $n = \#\phi$, we can think of the numeral $\underline{n}$ as a kind of name of $\phi$. So, if $n = \# \phi$, then the numeral $\underline{n}$ is written: $\ulcorner \phi \urcorner$, and is sometimes called the gödel quotation name of $\phi$.

So, using coding, the interpreted language of arithmetic is able to talk (indirectly) about the syntax of $\mathcal{L}$. For each syntactical entity $\phi \in \mathcal{L}$ has a kind of name $\ulcorner \phi \urcorner$ in $\mathcal{L}$ itself. Furthermore, special sets of (and relations amongst) syntactical entities become sets of numbers under coding. If $A$ is a set of syntactical entities, then $\#(A)$ is $\{\#(\alpha) : \alpha \in A \}$, the corresponding set of numbers. For example, the variables of $\mathcal{L}$ are coded as a set of numbers; the set of terms is coded as a set of numbers. Etc. Then one can ask whether these sets (or relations) are "expressible" in the language of arithmetic.

In general, these syntactical sets and relations come out, under coding, as either primitive recursive, or recursive, or recursively enumerable.

5. Primitive Recursive Functions

The primitive recursive functions are nowadays defined a bit differently from Gödel's original approach. (See this SEP article for more details.) First,
The basic functions:
(i) the zero function $Z : \mathbb{N} \rightarrow \mathbb{N}$, by $Z(n) = 0$.
(ii) the successor function $S : \mathbb{N} \rightarrow \mathbb{N}$, by $S(n) = n+1$.
(iii) the projection functions $P^k_j : \mathbb{N}^k \rightarrow \mathbb{N}$ (with $j \leq k$), by $P^k_j(n_1, \dots, n_k) = n_j$.
One may then build up more functions by either composition, or by primitive recursion. That is, given functions $g : \mathbb{N}^k \rightarrow \mathbb{N}$ and $h : \mathbb{N}^{k+2} \rightarrow \mathbb{N}$, we say that $f : \mathbb{N}^{k+1} \rightarrow \mathbb{N}$ is obtained by primitive recursion from $g$ and $h$ just if, for all $x, y_1, \dots, y_k \in \mathbb{N}$,
$f(y_1, \dots, y_k, 0) = g(y_1, \dots, y_k)$
$f(y_1, \dots, y_k, x+1) = h(x, y_1, \dots, y_k, f(y_1, \dots, y_k, x))$
The primitive recursive functions are then defined to be those functions obtainable from the basic functions by composition and primitive recursion.
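The schema just given can be written out directly. A Python sketch (the combinator names `compose` and `prim_rec` are mine) of the basic functions, composition, and primitive recursion:

```python
def Z(n):
    return 0                          # zero function

def S(n):
    return n + 1                      # successor function

def P(k, j):
    """Projection P^k_j (1-indexed, j <= k)."""
    return lambda *args: args[j - 1]

def compose(f, *gs):
    """h(args) = f(g_1(args), ..., g_m(args))."""
    return lambda *args: f(*(g(*args) for g in gs))

def prim_rec(g, h):
    """f(y_1,...,y_k, 0)   = g(y_1,...,y_k)
       f(y_1,...,y_k, x+1) = h(x, y_1,...,y_k, f(y_1,...,y_k, x))"""
    def f(*args):
        *ys, x = args
        if x == 0:
            return g(*ys)
        return h(x - 1, *ys, f(*ys, x - 1))
    return f

# Addition f(y, x) = y + x, obtained by primitive recursion from the
# projection P^1_1 and the composition of S with P^3_3:
add = prim_rec(P(1, 1), compose(S, P(3, 3)))
print(add(2, 3))  # 5
```

The point of the detour through projections is that $g$ and $h$ must have exactly the arities the schema demands; `add` is then primitive recursive by construction.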

There is a further operation on functions---minimization. Closing the primitive recursive functions under that operation then yields the recursive functions.

The notion of recursiveness can be extended to subsets of, and relations on, $\mathbb{N}$ by considering the characteristic function of the set or relation. Suppose $A \subseteq \mathbb{N}$. The characteristic function $\chi_A$ is defined by:
$\chi_A(n) = 1$ if $n \in A$, and 0 otherwise.
If $\chi_A$ is (primitive) recursive, we say that $A$ is (primitive) recursive.
For a relation $R \subseteq \mathbb{N}^k$, the definitions are analogous.

6. Representability in $Q$

There are two main notions of "expressing" a numerical function or relation. The first is semantical: given some set $A \subseteq \mathbb{N}$, simply whether there is an $\mathcal{L}$-formula $\phi(x)$ such that
$A = \{n : \mathbb{N} \models \phi(\underline{n})\}$.
If so, we say that $\phi(x)$ defines $A$.

The other notion is syntactical, and always involves a theory $T$ in $\mathcal{L}$. Given a function $f : \mathbb{N}^k \rightarrow \mathbb{N}$, we say $f$ is representable in $T$ just when there is an $\mathcal{L}$-formula $\phi(x_1, \dots, x_k, y)$ such that, for all $n_1, \dots, n_k$,
$T \vdash \forall y(y = \underline{f(n_1, \dots, n_k)} \leftrightarrow \phi(\underline{n_1}, \dots, \underline{n_k}, y))$
Given a theory $T$, a similar notion can be applied to subsets, and relations on, $\mathbb{N}$. Suppose $A \subseteq \mathbb{N}$. We say that $A$ is strongly representable in $T$ just when there is some formula $\phi(x)$, such that for all $n \in \mathbb{N}$,
(i) if $n \in A$ then $T \vdash \phi(\underline{n})$
(ii) if $n \notin A$, then $T \vdash \neg \phi(\underline{n})$
We say that $A$ is weakly representable in $T$ just when condition (i) holds, but with "if ... then" replaced by "... iff ..." (thanks to Kai Wehmeier for the correction).
An analogous definition can be given for a relation $R \subseteq \mathbb{N}^k$ to be strongly or weakly representable.

The most important representation property for $Q$ is:
Representability in $Q$
All recursive functions are representable in $Q$.
To prove this, one shows first that each of the basic functions is representable in $Q$; and then one shows that the representable functions are closed under composition, primitive recursion and minimization. Details of how to do this are given in standard textbooks, such as H. Enderton, A Mathematical Introduction to Logic. As Aldo Antonelli notes in the comments below, in order to show that functions obtained by primitive recursion are representable, one needs to show how sequence coding and decoding can be expressed in $\mathcal{L}$, which only has $S, +$ and $\times$ as primitives. The crucial result required is Gödel's Beta Function Lemma, using the Chinese Remainder Theorem. These results need only to be proved in the metatheory, not in $Q$ itself (which lacks induction).
The converse of the representability theorem is true as well: if $f$ is representable in $Q$, then $f$ is recursive.

A corollary of the representation theorem for $Q$ is that all recursive sets and relations are strongly representable in $Q$. So, if $R \subseteq \mathbb{N}^k$ is recursive, there is a formula $\phi(x_1, \dots, x_k)$ such that, for all $n_1, \dots, n_k \in \mathbb{N}$,
(i) if $Rn_1\dots n_k$, then $Q \vdash \phi(\underline{n_1}, \dots, \underline{n_k})$.
(ii) if $\neg Rn_1 \dots n_k$, then $Q \vdash \neg \phi(\underline{n_1}, \dots, \underline{n_k})$.
As it turns out, for a standard axiomatic theory $T$, the set of (codes of) axioms is recursive (for example, a computer program can be written to decide of a given input string whether it is an axiom or not), indeed, primitive recursive. Furthermore, the proof-in-$T$ relation, "$d$ is a proof of $\phi$ (in $T$)", is primitive recursive. So, there are $\mathcal{L}$-formulas which strongly represent (in $Q$) such relations. In particular, there is a formula $Prf_T(y,x)$ which strongly represents the proof-in-$T$ relation: $y$ is the code of the proof-in-$T$ of formula with code $x$. Then the formula $\exists y Prf_T(y,x)$ expresses "$x$ is a theorem of $T$" and is written $Prov_T(x)$.

7. Diagonalization

Consider the binary syntactic operation on strings,
the result of substituting the quotation of string $A$ for all free occurrences of a variable in string $B$.
Although this is binary, obviously, one can take this operation on the same argument, i.e.,
the result of substituting the quotation of string $A$ for all free occurrences of a variable in string $A$.
Informally, this is (syntactic) diagonalization.
For example, the diagonalization of
$x$ contains 23 symbols
is
"$x$ contains 23 symbols" contains 23 symbols
If $\phi$ is a formula of the language of arithmetic, with at most the variable $x$ free, then its diagonalization is defined to be:
$\phi(x/\ulcorner \phi \urcorner)$
I.e., the result of substituting the gödel quotation $\ulcorner \phi \urcorner$ for all free occurrences of $x$ in $\phi$.
An alternative, but equivalent (in predicate logic) is:
$\exists x(x = \ulcorner \phi \urcorner \wedge \phi)$
Clearly, this syntactic operation of diagonalization corresponds to a numerical function, $d : \mathbb{N} \rightarrow \mathbb{N}$, given by, for any $\mathcal{L}$-string $\phi$,
$d(\#\phi) = \#(\phi(x/\ulcorner \phi \urcorner))$ if $\phi$ is an $\mathcal{L}$-formula with at most $x$ free;
$d(n) = 0$ if $n$ is not the code of such an $\mathcal{L}$-formula.
One can show (the details are tedious) that $d$ is primitive recursive.
Because all recursive functions are representable in $Q$, it follows that there is an $\mathcal{L}$-formula $Diag(x,y)$ such that, for all $n \in \mathbb{N}$,
$Q \vdash \forall y(y = \underline{d(n)} \leftrightarrow Diag(\underline{n}, y))$
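At the level of informal English strings, the diagonalization operation is easy to simulate. A toy Python sketch (using `{x}` as the free-variable placeholder, my own convention, with ordinary quotation marks standing in for gödel quotation):

```python
def diagonalize(phi):
    """Substitute the quotation of phi for the free variable {x} in phi."""
    return phi.format(x='"' + phi + '"')

phi = "{x} contains 23 symbols"
print(diagonalize(phi))
# "{x} contains 23 symbols" contains 23 symbols
```

This is the string-level operation; the arithmetized version $d$ does the same thing to codes of formulas rather than to the strings themselves.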

8. The Diagonal (or Fixed Point) Lemma

Let $\phi(x)$ be an $\mathcal{L}$-formula with at most $x$ free. (If $\phi$ happens to have only one variable free, say $y$, but not $x$, then one can do a relabelling of variables to ensure that $\phi$ has only $x$ free.) We want to find a formula $\theta$ that is in some sense equivalent to $\phi(\ulcorner \theta \urcorner)$.
First consider informally, the expression,
the diagonalization of $x$ is $\phi$
If one diagonalizes this, one gets,
the diagonalization of "the diagonalization of $x$ is $\phi$" is $\phi$
Call this statement $\theta$. Now $\theta$ says that a certain thing is $\phi$. This thing is the diagonalization of "the diagonalization of $x$ is $\phi$"---but this is precisely the statement $\theta$ itself. Such a statement $\theta$ is called a fixed-point sentence for the original formula $\phi(x)$, and it says of itself that it is $\phi$.

So, to get this into the formalized language $\mathcal{L}$, we begin with some formula $\phi$ and we consider,
$\exists y(Diag(x, y) \wedge \phi(y))$
Call this formula $\psi(x)$. Let $\theta$ be the diagonalization of $\psi$. That is, $\theta$ is $\psi(\ulcorner \psi \urcorner)$. So, trivially, we have,
$\vdash \theta \leftrightarrow \psi(\ulcorner \psi \urcorner)$
In particular, $\#(\theta) = d(\# (\psi))$. So, the term $\ulcorner \theta \urcorner$ is the numeral $\underline{d(\# (\psi))}$. And thus, trivially, we have
$\vdash \ulcorner \theta \urcorner = \underline{d(\# (\psi))}$
Furthermore, since $Q$ represents $d$, we have,
$Q \vdash \forall y (y = \ulcorner \theta \urcorner \leftrightarrow Diag(\ulcorner \psi \urcorner, y))$
Let us now reason within $Q$:
$Q \vdash \theta \leftrightarrow \psi(\ulcorner \psi \urcorner)$
$Q \vdash \theta \leftrightarrow \exists y(Diag(\ulcorner \psi \urcorner, y) \wedge \phi(y))$
$Q \vdash \theta \leftrightarrow \exists y(y = \ulcorner \theta \urcorner \wedge \phi(y))$
$Q \vdash \theta \leftrightarrow \phi(\ulcorner \theta \urcorner)$
This establishes the
Diagonal Lemma
Given an $\mathcal{L}$-formula $\phi(x)$, there is an $\mathcal{L}$-sentence $\theta$ such that
$Q \vdash \theta \leftrightarrow \phi(\ulcorner \theta \urcorner)$
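Read computationally, the fixed-point construction is the same trick that makes quines possible: a template applied to its own quotation reproduces the whole. A one-line Python analogy (just an illustration, not part of the Lemma):

```python
# theta has the shape psi(quote(psi)): the template s, applied to its own
# quotation (via {0!r}), prints the program's exact source line.
s = 's = {0!r}; print(s.format(s))'; print(s.format(s))
```

Here `s` plays the role of $\psi$, and the whole line plays the role of $\theta$: it "says of itself" what it is.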

9. Diagonal Lemma using a Function Term

For the Diagonal Lemma, one needs that the external syntactic operation of diagonalization on a string be represented in the theory. Under coding, the diagonalization operation corresponds to a primitive recursive function, $d : \mathbb{N} \rightarrow \mathbb{N}$, the diagonal function.

$Q$ has the property that all recursive functions are representable --- which means that for any recursive function $f : \mathbb{N}^k \rightarrow \mathbb{N}$, there is a formula $\phi_f(x_1, \dots, x_k, y)$ which represents it. Note that the choice of such a formula is not unique. But $Q$ is weak, and it is not always true that $Q$ proves that this formula is functional. In particular, $Q$ may not prove $\forall x_1 \dots \forall x_k \exists y \phi_f(x_1, \dots, x_k, y)$. For example, $Q$ cannot prove the totality of exponentiation.

However, if one extends $Q$ with the induction scheme, then this situation changes. $PA$ is strong enough to prove the totality of any primitive recursive function. In particular, one can choose the formula $Diag(x,y)$ representing the diagonal function $d$ such that,
$PA \vdash \forall x \exists y Diag(x, y)$
When this is so, one can introduce, by explicit definition, a new function symbol $diag(x)$ as follows:
$y = diag(x) := Diag(x, y)$
Let $PA^+$ be the result of adding this definition to $PA$. By the representability of $d$, we have,
(*) $PA^+ \vdash diag(\ulcorner \phi \urcorner) = \ulcorner \phi(\ulcorner \phi \urcorner) \urcorner$
Returning to the Diagonal Lemma, suppose that one wishes to find the fixed-point for a given formula $\phi$. As noted, one needs an $\mathcal{L}$-formula expressing,
the diagonalization of $x$ is $\phi$
which one then diagonalizes to obtain $\theta$.
Above, "the diagonalization of $x$ is $\phi$" was expressed with
$\exists y(Diag(x, y) \wedge \phi(y))$
But with a function term $diag(x)$, one can use instead
$\phi(diag(x))$
One may then diagonalize this, to obtain a fixed point sentence $\theta$
$\theta: = \phi(diag(\ulcorner \phi(diag(x)) \urcorner))$
Note that $\theta$ has the form $\phi(t)$, where $t$ is the term $diag(\ulcorner \phi(diag(x)) \urcorner)$. To show that this works, note that we have from (*),
$PA^+ \vdash diag(\ulcorner \phi(diag(x)) \urcorner) = \ulcorner \phi(diag(\ulcorner \phi(diag(x)) \urcorner)) \urcorner$
and so,
$PA^+ \vdash t = \ulcorner \phi(t) \urcorner$
So, $\phi(t)$ is a self-referential sentence containing a term $t$ provably (in $PA^+$) denoting the code of $\phi(t)$. And we have a modified version of the Diagonal Lemma,
Diagonal Lemma using a Term $diag(x)$
Let $PA^+$ be $PA$ extended with an explicit definition of a function term $diag(x)$. Let $\phi(x)$ be an $\mathcal{L}$-formula. Then there is a term $t$ such that,
$PA^+ \vdash t = \ulcorner \phi(t) \urcorner$
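The string manipulation behind this lemma can be mimicked in a few lines of code. The Python sketch below is only an illustration, not the formal construction: the predicate name `Provable` is a placeholder, quotation is done with Python's `repr`, and substitution is a crude `replace` that assumes the letter `x` occurs only as the free variable. With those caveats, `fixed_point` builds $\theta = \phi(diag(\ulcorner \phi(diag(x)) \urcorner))$, and evaluating the inner `diag` term reproduces $\theta$ itself, which is the content of $t = \ulcorner \phi(t) \urcorner$:

```python
# A string-level sketch of the term-based Diagonal Lemma.
# diag(s) substitutes a quoted copy of s for its free variable x.
# (Crude: it replaces every occurrence of the letter x, so the
# "formula" must not use x anywhere else.)
def diag(s: str) -> str:
    return s.replace("x", repr(s))

def fixed_point(phi: str) -> str:
    """Given a formula phi with free variable x, return
    theta = phi(diag('phi(diag(x))'))."""
    psi = phi.replace("x", "diag(x)")   # psi(x) is phi(diag(x))
    return psi.replace("x", repr(psi))  # diagonalize psi

# Example with the placeholder predicate "Provable":
theta = fixed_point("Provable(x)")
print(theta)  # Provable(diag('Provable(diag(x))'))

# The inner term diag('Provable(diag(x))') evaluates to theta itself:
print(diag("Provable(diag(x))") == theta)  # True
```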

9. Gödel Sentence

For an arithmetic theory $T$ to which Gödel's incompleteness theorem applies, a Gödel sentence $G$ is usually defined to be a fixed-point of the formula $\neg Prov_{T}(x)$, where $Prov_{T}(x)$ is a formula expressing "$x$ is a theorem of $T$". That is,
$Q \vdash G \leftrightarrow \neg Prov_{T}(\ulcorner G \urcorner)$
The construction above shows that, if we use $PA$ instead of $Q$ as our underlying syntax theory, $G$ can be expressed rather vividly as follows:
$G: = \neg Prov_{T}(diag(\ulcorner \neg Prov_{T}(diag(x))\urcorner))$
So, $G$ has the form $\neg Prov_{T}(t)$, where $t$ denotes the code of $G$.

## Saturday, 17 March 2012

### Judea Pearl wins Turing Award (the "Nobel Prize in Computing")

"For fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning."

See here and here.

## Wednesday, 7 March 2012

### Diagonalization and Différance: a mismatch of scope

(This post is part of the NewAPPS symposium on Paul Livingston's 'Derrida and Formal Logic: Formalizing the Undecidable'. Its intended audience is rather different from the M-Phi readership, but I hope it will be of interest anyway.)

Paul Livingston’s paper presents a comparative analysis of Gödel’s incompleteness results, Priest on diagonalization, and Derrida on différance. One of the goals seems to be to show that there are significant analogies between these different concepts, which would be at odds with the fact that Derrida’s ideas encounter great hostility among most philosophers in the analytic tradition. In particular, Derrida’s différance and the concept/technique of diagonalization are presented as being each other’s counterpart (a view earlier defended by Graham Priest in Beyond the Limits of Thought).

But crucially, while différance is presented as cropping up in any discourse whatsoever, for a particular language/formal system to have the kind of incompleteness identified by Gödel specifically with respect to arithmetic, certain conditions must hold of the system. So a fundamental disanalogy between what could be described as the ‘Gödel phenomenon’ (incompleteness arising from diagonalization and the formulation of a so-called Gödel sentence) and Derrida’s différance concerns the scope of each of them: the latter is presented as a perfectly general phenomenon, while the former is provably restricted to a specific (albeit significant) class of languages/formal systems. Although Livingston does not fail to mention that a system must have certain expressive characteristics for the Gödel phenomenon to emerge, it seems to me that he downplays this aspect in order to establish the comparison between différance and diagonalization. (There is much more that I could say on Livingston’s thought-provoking piece, but for reasons of space I will focus on this particular aspect.)

First, let me present one possible formulation of Cantor’s diagonal argument – notice that it is not Cantor’s own original formulation, but this is the formulation that I use to explain diagonalization to students (so far successfully). The goal is to establish that there are more real numbers than natural numbers, even though both sets are infinite (leading to the famous conclusion that infinity comes in different sizes). In fact, it is not necessary to consider all real numbers; even in the interval between 0 and 1 there are more real numbers than there are natural numbers. This is how the argument goes: assume that you have produced an enumeration of all the (infinite) decimal expansions between 0 and 1, for example:

1 0,38957…

2 0,74581…

3 0,64633…

4 0,99432…

5 0,72590…

Now take the first digit of the first item on the list, the second digit of the second item, and so on down the diagonal, and add 1 to each of them (with 9 wrapping around to 0). One then produces the following number:

0,45741…

This number is provably not on the original list, because it differs in at least one position from each of the items already on the list (this is what the operation of adding 1 uniformly is meant to accomplish). And yet, by hypothesis it should have been on the list, because it is a decimal expansion between 0 and 1: it both ‘is and is not’ on the list. So the original hypothesis, namely that the list contained the totality of decimal expansions between 0 and 1, is proved to be false, which then allows for the conclusion that there cannot be an enumeration of this totality, and thus no surjective mapping from the natural numbers to the reals between 0 and 1. (Adding the newly produced number to the list does not solve the problem, as the same procedure can be iterated again and again, always producing items not on the list.)
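The construction can be run mechanically. Here is a small Python sketch using the five truncated expansions listed above (real expansions are infinite, of course; the truncation to five digits and the wrap of 9 to 0 are the only liberties taken):

```python
# First five digits of the listed expansions (truncated; really infinite).
listed = ["38957", "74581", "64633", "99432", "72590"]

# Take the n-th digit of the n-th expansion and add 1, with 9 wrapping to 0.
diagonal = "".join(str((int(row[i]) + 1) % 10) for i, row in enumerate(listed))

print("0," + diagonal)  # 0,45741 — the number produced in the text

# The +1 shift guarantees a mismatch with row n at position n:
for i, row in enumerate(listed):
    assert diagonal[i] != row[i]
```

However long the list, the diagonal number differs from the $n$-th entry at the $n$-th digit, so it cannot appear anywhere on the list.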

This is how a typical diagonalization argument illustrates the paradoxical interplay between Closure and Transcendence, in Priest’s terminology, which Livingston discusses at length in the paper. (Dennis des Chene points out to me in correspondence that a diagonal argument need not be formulated as a reductio argument, which is its usual formulation; if formulated as a direct, constructive argument, then the Closure-Transcendence interplay is not nearly as explicit. This observation has interesting philosophical implications, but I’ll leave them for another occasion.)

More technically, the diagonalization of an expression is now usually understood as the result of substituting its own quotation name into itself: for example, the diagonalization of x = 0 (using ‘ ’ to generate the name of the expression) is

‘x = 0’ = 0

(i.e. the x in the original expression has been replaced by the name of the expression itself – I borrow this example from Jeff Ketland). Diagonalization is a key component of Gödel’s incompleteness theorems, and Haim Gaifman has a very nice paper showing how the results arise naturally from the diagonalization method. (Note: from now on, my observations will rely heavily on the insightful comments to a post I wrote on the topic at M-Phi, in preparation for this post. Thanks again to those who have chipped in there!)
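In code, that substitution step is a one-liner. A minimal Python sketch (assuming the expression's only free variable is written x, and using straight single quotes as a stand-in for the quotation device):

```python
def diag(expr: str) -> str:
    # Replace the free variable x by the quotation name of the whole expression.
    # (Crude: assumes the letter x occurs only as the free variable.)
    return expr.replace("x", "'" + expr + "'")

print(diag("x = 0"))  # 'x = 0' = 0
```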

Gödel’s original argument was intended to show that arithmetic is incomplete, as the Gödel sentence ‘This sentence is not provable’ (constructed by diagonalization) is not provable if true (if false, then the system proves a false sentence, and thus is unsound, and that would be much worse than being incomplete!). This means that any language/formal system containing arithmetic will give rise to the Gödel phenomenon, and thus that containing arithmetic is a sufficient condition for the Gödel kind of incompleteness to emerge.

But there is a significant body of research (dating back to Smullyan, but more recently in work by Andrzej Grzegorczyk and Albert Visser) showing that other systems which are not obviously ‘arithmetical’ are prone to the same kind of Gödel-like incompleteness, in particular theories of syntax and theories of computation. In other words, containing arithmetic (in a particular sense of ‘containing’ at least) does not seem to be a necessary condition for the Gödel phenomenon to arise. As well put by Jeff Ketland (in comments here), “arithmetic, syntax and computation are all heavily intertwined. It's what they have in common which is at the heart of the phenomenon.” These investigations shed light on the technical question of which conditions a formal system must satisfy in order to be Gödel-incomplete.

Now back to Livingston’s text. Derrida’s concept of différance is presented as absolutely general, as emerging in any form of discourse. Now, it is clear that the Gödel phenomenon does not pertain only to arithmetic and related systems, but it is also clear that it does not affect any arbitrary language/formal system. Certain conditions, especially related to expressivity, must be met for diagonalization to be possible. Needless to say, there are many significant systems that are perfectly complete, first-order predicate logic for example (whose completeness was proved by Gödel himself, prior to the incompleteness results).

So while there are interesting conceptual and structural similarities, the fact that différance and diagonalization/incompleteness have different scopes seems to me to suggest that the analogy is not as smooth as Livingston seems to present it to be. Whether a given language gives rise to the Gödel phenomenon or not is a technical, non-trivial question, thus not warranting the kind of generalization that Livingston seems to be proposing.

A final general remark: even if it is ‘writing itself’ that arguably gives rise to the phenomenon of diagonalization, it is also writing and formalization itself which allow for the very investigation of the limits of formalization. The technique creates a limitation, so to speak, but it also has the resources to investigate the very limitation, and that’s pretty impressive.

## Thursday, 1 March 2012

### Is arithmetic a necessary condition for Gödel incompleteness?

This is really a question for Jeff, but I hope others will be interested as well. Here is the thing: next week NewAPPS will be hosting a symposium on a text by Paul Livingston which presents a comparative analysis of Gödel’s incompleteness theorems, and Graham Priest and Derrida (yes indeed!) on diagonalization. It is an interesting text, even though some of the more metaphysical claims seem a bit over-the-top to me (as I will argue in my contribution to the symposium). But anyway, so I’ve been thinking about the whole Gödel thing again and how wide-ranging the conclusions drawn by Livingston really are, and this got me thinking about the conditions that a system must satisfy for the Gödel argument to go through. It is clear that containing arithmetic is a sufficient condition for the argument to go through, as arithmetic allows for the encoding which then gives rise to the Gödel sentence.

But my question now is: is containing arithmetic a necessary condition in a system for the Gödel argument to go through? It seems to me that the answer should be negative. In fact, I recall that many years ago I heard Haim Gaifman saying that any other suitable encoding technique would be enough for the argument to run; arithmetization is just a particularly convenient encoding method. So my first question is whether this is indeed correct, i.e. that containing arithmetic is not a necessary condition for a Gödel incompleteness argument to take off. The second question is whether there are interesting examples of systems that can be proved to be incomplete by a Gödel argument even though they do not ‘contain arithmetic’ (I have the feeling that even the concept of ‘containing arithmetic’ might need to be clarified).

Thoughts, anyone?