Wednesday, 7 March 2012

Diagonalization and Différance: a mismatch of scope

(This post is part of the NewAPPS symposium on Paul Livingston's 'Derrida and Formal Logic: Formalizing the Undecidable'. Its intended audience is rather different from the M-Phi readership, but I hope it will be of interest anyway.)

Paul Livingston’s paper presents a comparative analysis of Gödel’s incompleteness results, Priest on diagonalization, and Derrida on différance. One of the goals seems to be to show that there are significant analogies between these different concepts, which would be at odds with the fact that Derrida’s ideas encounter great hostility among most philosophers in the analytic tradition. In particular, Derrida’s différance and the concept/technique of diagonalization are presented as being each other’s counterpart (a view earlier defended by Graham Priest in Beyond the Limits of Thought).

But crucially, while différance is presented as cropping up in any discourse whatsoever, for a particular language/formal system to have the kind of incompleteness identified by Gödel specifically with respect to arithmetic, certain conditions must hold of the system. So a fundamental disanalogy between what could be described as the ‘Gödel phenomenon’ (incompleteness arising from diagonalization and the formulation of a so-called Gödel sentence) and Derrida’s différance concerns the scope of each: the latter is presented as a perfectly general phenomenon, while the former is provably restricted to a specific (albeit significant) class of languages/formal systems. Although Livingston does not fail to mention that a system must have certain expressive characteristics for the Gödel phenomenon to emerge, it seems to me that he downplays this aspect in order to establish the comparison between différance and diagonalization. (There is much more that I could say on Livingston’s thought-provoking piece, but for reasons of space I will focus on this particular aspect.)

First, let me present one possible formulation of Cantor’s original diagonal argument – notice that it is not Cantor’s own original formulation, but this is the formulation that I use to explain diagonalization to students (so far successfully). The goal is to establish that there are more real numbers than natural numbers, even though both sets are infinite (leading to the famous conclusion that infinity comes in different sizes). In fact, it is not necessary to consider all real numbers; even in the interval between 0 and 1 there are more real numbers than there are natural numbers. This is how the argument goes: assume that you produce an enumeration of each possible (infinite) decimal expansion between 0 and 1, for example:

1 0,[3]8957…

2 0,7[4]581…

3 0,64[6]33…

4 0,994[3]2…

5 0,7259[0]…

Now take the first digit of the first item on the list, the second digit of the second item, etc. (the digits indicated in brackets above), and add 1 to each of them (with 9 wrapping around to 0, so that the result is still a digit). One then produces the following number:

0,45741…

This number is provably not on the original list, because it differs in at least one position from each of the items already on the list (this is what the operation of adding 1 uniformly is meant to accomplish). And yet, by the hypothesis it should have been on the list, because it is a decimal expansion between 0 and 1: it both ‘is and is not’ on the list. So the original hypothesis, namely that the list contained the totality of decimal expansions between 0 and 1, is proved to be false, which then allows for the conclusion that there cannot be an enumeration of this totality, and thus no surjective mapping from the natural numbers to the reals between 0 and 1. (Adding the newly produced number to the list does not solve the problem, as the same procedure can be iterated again and again, always producing items not on the list.)
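The construction is easy to make concrete in code. Here is a minimal sketch in Python (the function name `diagonalize` and the digit-string representation are mine, purely for illustration), applied to the five-item list above:

```python
# A finite illustration of the diagonal construction: given an enumeration
# of decimal expansions (here, finite prefixes), build a number that differs
# from the n-th entry at its n-th digit. Each diagonal digit is bumped by 1,
# with 9 wrapping around to 0 so the result is still a digit.

def diagonalize(expansions):
    """expansions: list of digit strings (the part after '0,')."""
    diagonal_digits = []
    for n, digits in enumerate(expansions):
        d = int(digits[n])                         # n-th digit of n-th item
        diagonal_digits.append(str((d + 1) % 10))  # differs at position n
    return "0," + "".join(diagonal_digits)

the_list = ["38957", "74581", "64633", "99432", "72590"]
print(diagonalize(the_list))  # 0,45741 — differs from each item in one place
```

By construction the output disagrees with the n-th listed expansion at the n-th decimal place, so it cannot occur anywhere on the list.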

This is how a typical diagonalization argument illustrates the paradoxical interplay between Closure and Transcendence, in Priest’s terminology, which Livingston discusses at length in the paper. (Dennis des Chene points out to me in correspondence that a diagonal argument need not be formulated as a reductio argument, which is its usual formulation; if formulated as a direct, constructive argument, then the Closure-Transcendence interplay is not nearly as explicit. This observation has interesting philosophical implications, but I’ll leave them for another occasion.)

More technically, the diagonalization of an expression is now usually understood as the result of concatenating its quotation name with itself: for example, the diagonalization of x = 0 (using ‘ ’ to generate the name of the expression) is

‘x = 0’ = 0

(i.e. the x in the original expression has been replaced by the name of the expression itself – I borrow this example from Jeff Ketland). Diagonalization is a key component of Gödel’s incompleteness theorems, and Haim Gaifman has a very nice paper showing how the results arise naturally from the diagonalization method. (Note: from now on, my observations will rely heavily on the insightful comments to a post I wrote on the topic at M-Phi, in preparation for this post. Thanks again to those who have chipped in there!)
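As a toy illustration of this operation on strings (my own sketch; quotation is modelled naively with the marks ‘ ’, standing in for a proper naming device such as Gödel numerals):

```python
# A rough sketch of syntactic diagonalization as just described: replace
# the free variable x in a formula by the formula's own quotation name.

def quote(expr):
    return "‘" + expr + "’"

def diagonalize(formula, var="x"):
    # substitute the quotation name of the formula for its free variable
    return formula.replace(var, quote(formula))

print(diagonalize("x = 0"))  # ‘x = 0’ = 0
```

As the comments below bring out, whether this quotational operation by itself achieves genuine self-reference is a further question.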

Gödel’s original argument was intended to show that arithmetic is incomplete, as the Gödel sentence ‘This sentence is not provable’ (constructed by diagonalization) is not provable if true (if false, then the system proves a false sentence, and is thus unsound, and that would be much worse than being incomplete!). This means that any language/formal system containing arithmetic will give rise to the Gödel phenomenon, and thus that containing arithmetic is a sufficient condition for the Gödel kind of incompleteness to emerge.

But there is a significant body of research (dating back to Smullyan, but more recently in work by Andrzej Grzegorczyk and Albert Visser) showing that other systems which are not obviously ‘arithmetical’ are prone to the same kind of Gödel-like incompleteness, in particular theories of syntax and theories of computation. In other words, containing arithmetic (in a particular sense of ‘containing’, at least) does not seem to be a necessary condition for the Gödel phenomenon to arise. As Jeff Ketland puts it (in comments here), “arithmetic, syntax and computation are all heavily intertwined. It's what they have in common which is at the heart of the phenomenon.” These investigations shed light on the technical question of which conditions a formal system must satisfy in order to be Gödel-incomplete.

Now back to Livingston’s text. Derrida’s concept of différance is presented as absolutely general, as emerging in any form of discourse. Now, it is clear that the Gödel phenomenon does not pertain only to arithmetic and related systems, but it is also clear that it does not affect any arbitrary language/formal system. Certain conditions, especially related to expressivity, must be met for diagonalization to be possible. Needless to say, there are many significant systems that are perfectly complete, first-order predicate logic for example (whose completeness was proved by Gödel himself, prior to the incompleteness results).

So while there are interesting conceptual and structural similarities, the fact that différance and diagonalization/incompleteness have different scopes seems to me to suggest that the analogy is not as smooth as Livingston seems to present it to be. Whether a given language gives rise to the Gödel phenomenon or not is a technical, non-trivial question, thus not warranting the kind of generalization that Livingston seems to be proposing.

A final general remark: even if it is ‘writing itself’ that arguably gives rise to the phenomenon of diagonalization, it is also writing and formalization itself which allow for the very investigation of the limits of formalization. The technique creates a limitation, so to speak, but it also has the resources to investigate that very limitation, and that’s pretty impressive.

33 comments:

  1. In comments here, a discussion on whether the Livingston text does or does not betray a misunderstanding of Godel's theorems:

    http://www.newappsblog.com/2012/03/new-apps-symposium-on-paul-livingston-derrida-and-formal-logic.html

    ReplyDelete
  2. Catarina: The way Godel proved negation-incompleteness via a type of diagonalisation (which does bear some structural analogies to the usual diagonal argument for the uncountability of the reals) is rather different from more recent versions of the proof using the fixed point 'diagonalisation lemma'. Peter Milne has a super piece on all this, 'On Godel Sentences and What They Say', Philosophia Mathematica 2007, and there has been some discussion in Peter Smith's Logic Matters blog about where the fixed point technique came from - Carnap is often cited but this was contested in that blog discussion.

    Is the use of quotation names - literally using quotation - really how diagonalisation is now usually understood? Isn't that just an informal way of getting the idea across? For example, if one proceeds via the fixed point lemma in arithmetic then diagonalisation with respect to a predicate Phi(x) in arithmetised syntax usually concerns the provability (or truth in all models) of biconditionals of the form P <-> Phi(SSS....0)

    where the string of n S's then 0 is the canonical numeral for the number n, this being the code of P.

    I agree ('completely'!) that completeness, as in '1st order logic is complete', is nothing to do with Godelian incompleteness; it's amazing how many confusing and conflicting differences (if not différances) there are among logicians on terminology: consistency as syntactic versus semantic, validity as solely applying to arguments versus logical truths as validities, and so on.

    Godel's 1930s proof showed that the converse of the soundness theorem held too; sometimes this is called adequacy: if P is true in a model when all of X are, then X proves P. This is nothing to do with negation-incompleteness: that a given set of sentences (for example the closure of some axioms under a given derivability relation) contains neither P nor ¬P, for some P. Or, focusing on derivability, that neither P nor ¬P are provable in the theory in question. As you say, there are (in abundance) negation-incomplete theories in 'converse sound' first-order logic, propositional logic and so on. And negation-complete theories in not converse sound logics such as full 2nd order logic; most brutally, take the set of all truths of 2nd order arithmetic as the (non-recursively axiomatisable) theory.

    Jeff would be able to come up with better examples of theories which prove P or ¬P for every P even when the logic is inadequate, not converse sound. Maybe arithmetic in the language with only +, S and 0 is an example. It's negation-complete classically I suspect, but am too lazy to work this out, that it is also intuitionistically though that logic is inadequate w.r.t. the usual classical semantics.

    Best Alan

    ReplyDelete
  3. sorry I meant to write 'it's negation complete classically; I suspect ...'

    not

    'it's negation complete classically I suspect ...'

    typical sloppy amateur logician! Alan

    ReplyDelete
  4. In fact (this will almost certainly be my last foray tonight on this, as I'm heading for drink to sustain me through watching the football highlights!), in fact quotation names aren't, when one thinks about this, a way to 'diagonalise', they can't be because they contain what they name ('can't', given well-foundedness of containing). Thus in

    ‘x = 0’ = 0

    the singular term "‘x = 0’" doesn't refer (and couldn't possibly refer) to the sentence it is part of; the sentence it is part of is not 'x = 0'. (It does refer to the open sentence from which the sentence it is part of is constructed, and that is pretty similar, it is true, to the method Godel uses in his substitution function.)

    That's why something like Godel's substitution function, or the fixed point lemma (or more informally demonstratives or empirical descriptions- 'the sentence on the board in room blah blah is untrue' etc.) is needed. Alan

    ReplyDelete
  5. Hi Alan, if $\phi$ is a formula with possibly $x$ free, then on the usual approach, the diagonalization of $\phi$ (with respect to $x$) is $\phi(x/\ulcorner \phi \urcorner)$. I.e., it is the result of substituting the name of $\phi$ for all free occurrences of $x$ in $\phi$: Alternatively, it can be defined as $\exists x(x = \ulcorner \phi \urcorner \wedge \phi)$.
    (See a bit more in my first comment on Catarina's first post on this topic below.)

    A very simple example is: consider the formula $x = \underline{0}$. Its diagonalization is $\ulcorner x = \underline{0} \urcorner = \underline{0}$.

    On the other hand, in Quine's informal version, and Smullyan's, diagonalization is defined using concatenation and naming (of expressions). I don't have my books with me, but if I recall right, the Smullyan diagonalization of an expression $E$ is simply $\ulcorner E \urcorner E$.
    The advantage of this version is that it avoids certain irrelevance involving variables. Quine's informal version (from memory, I'm away from my books) is:

    (L) "yields a false sentence when prefixed by its own quotation" yields a false sentence when prefixed by its own quotation.

    This requires only naming and concatenation. And (L) is the liar sentence.

    Corresponding to the notion of the diagonalization of a formula is the diagonal function, $d: \mathbb{N} \rightarrow \mathbb{N}$. This is recursive, and so is representable in Q, say by a formula $Diag(x,y)$.

    For any formula $\phi$ that we have, the fixed point formula $\theta$ is the diagonalization of the formula $\exists y(Diag(x, y) \wedge \phi(y))$.
    (I.e., informally, "the diagonalization of $x$ is $\phi$''.)

    Then the diagonal lemma states: if $\phi$ is a formula with at most $x$ free, there is a sentence $\theta$ such that,

    $Q \vdash \theta \leftrightarrow \phi(\ulcorner \theta \urcorner)$.

    So, the Gödel sentence G is the diagonalization of "the diagonalization of $x$ is not a theorem of $T$''.

    Analogously, the liar sentence $\lambda$ is the diagonalization of "the diagonalization of $x$ is not true''.
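    As a toy model, both notions can be mimicked with plain strings; in the sketch below, `diag` is a string operation standing in for the arithmetical diagonal function, and `NotTrue` is a hypothetical predicate:

```python
# Toy string model of the fixed-point construction: diag substitutes the
# quotation of an expression for its free variable x. Applying diag to
# phi(diag(x)) yields a sentence theta that contains a term denoting theta.

def quote(e):
    return "'" + e + "'"

def diag(e):
    # substitute the quotation name of e for the variable x in e
    return e.replace("x", quote(e))

psi = "NotTrue(diag(x))"   # i.e. phi(diag(x)), with phi = NotTrue
theta = diag(psi)
print(theta)  # NotTrue(diag('NotTrue(diag(x))'))
```

    The term diag('NotTrue(diag(x))') occurring inside theta evaluates (by applying diag to psi) to theta itself, which is the self-reference the lemma exploits.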

    Jeff

    ReplyDelete
  6. Hi Alan and Jeff, thanks for the comments! What I think emerges from these points, and also the discussion at NewAPPS, is that there are several presumably equivalent but conceptually different notions of diagonalization floating around, and everyone seems to have their favorite one! So specifically wrt discussing the Livingston target article, the many-splendoredness of diagonalization makes things more complicated.

    I added an update at the NewAPPS post identical to this one (well, not identical anymore!), to clarify the bit about the diagonal lemma that Jeff refers to here, as I felt that as it stood the point about concatenation was unclear.

    ReplyDelete
  7. Jeff, Catarina: I have a confession to make which horrifies most logicians: I hate Latex! I have a couple of applications, Lyx, Texmaker, but have never seriously tried to use them so don't really know how they work. But I got Texmaker to work on your text and can now read it; I hope it's come out properly, except I don't think it's produced corner quotes.

    I don't think we disagree too much, my main point was you can't actually diagonalise using quotation. I'm not near my Quine books either but your memory of Quine's beautiful and ingenious example seems right to me. But it doesn't use pure quotation it also uses demonstrative pronouns 'its own quotation'. So it's not so different from 'this sentence isn't provable', which does not achieve self-reference by the context-free methods used in arithmetization of syntax.

    I think we disagree slightly on the formal stuff but I'm not sure my conversion of the tex worked. Thus I get you as saying

    consider the formula x = <+UL>0<-UL

    where I'm using <+UL> to indicate the zero is underlined but I see no corner quotes which is what I suspect 'ulcorner' was supposed to add. So my point was a) take ordinary quotes. I:

    'x=0'=0

    is not the diagonalisation of II:

    x=0

    for the singular term in I does not stand for I it stands for II. Quinean corner quotes do not alter this. If I use double quotes for them "x=0" is an expression with variables, something like x^'='^'0' with the ^ indicating concatenation and x a variable free for replacement by any name. Well if we replaced it by 'Senga' in both I and II, the singular term in I would still not refer to I.

    So maybe we do disagree: when you say 'it is the result of substituting the name of $\phi$ for all free occurrences of $x$ in $\phi$:' you are saying, in the above terms, that the diagonalisation of II should be a sentence which says something about II. My memory of the use of the term is that this is not so, the diagonalisation is a sentence which talks of itself, via the diagonalised open sentence.


    Godel's original method (I'm going by memory of the proof I don't have my van Heijenoort with me) would have diagonalised II by something like III

    Sub,Sk0) = 0

    where k is the Godel code of II, 'Sk0' is the canonical numeral naming the number k, '' the canonical numeral of that canonical numeral, the latter being the string consisting of k successor terms 'S' then '0'. The subject term in III thus actually refers to III by diagonalising on II because the substitution function outputs what you get when you substitute for a fixed variable the numeral which is input, here .

    In the diagonalisation lemma, things are done a little differently, there is no actual self-reference. Thus in your example, the sentence on the r.h.s of the biconditional which applies predicate Phi to a name of sentence theta doesn't actually name itself, but rather a different (but provably in Q equivalent) sentence theta.

    I'm sure somewhere in Derrida this is all made perfectly clear. Not (as my kids say in reverse Polish notation).

    ReplyDelete
    Replies
    1. Reverse Polish notation! Love that...

      What counts or does not count as self-referential is, perhaps surprisingly, non-trivial, as we have learned for example from Yablo's paradox and Graham Priest's reconstruction of it, on which it comes out as self-referential after all.

      I agree that in the post I go too quickly from concatenation to diagonalization, but in the end I think I still side with Jeff here.

      Delete
    2. I think I'll stick to my guns: if the godel code of I: x = 0, is forty two (of course it wouldn't be that low) then II: 42 = 0, does not diagonalise I, because '42' does not refer to II, it picks out I. Equivalently (using "'" for corner quotes here, to create names of codes) III: 'x = 0' = 0, is not a diagonalisation of I.

      Rather we need something like: IV: Diag(x) = 0; if its code is 36 then V: Diag(36) = 0 (or if you like Diag('x=0') = 0) is the diagonalisation of IV. 'Diag(36)' picks out the result of substituting the standard numeral (using decimals for brevity) for thirty six for the chosen variable 'x' in the formula with code thirty six, i.e. sticking '36' for 'x' in IV. That gives a singular term referring to V and the self-reference we want. (Fixed point lemmas are different, there's no direct self-reference.)

      We can do this with quantifiers and predicate expressions, rather than complex singular terms. Actually the Quine example (which I think should have 'append' rather than 'prefix') is more like that, it too doesn't give a diagonalisation of the one-place predicate. You just can't have an expression with a quotation name (in the older sense) '...' as a proper part where the proper part ... is the whole expression. Similarly on normal codings, you can't have a sentence contain a standard name of its own code.

      Delete
    3. I basically agree with you, in the sense that you need something more to give rise to the Godel-effect, so to speak. So what really does it to achieve this kind of self-reference in the formulation of the theorem we've been talking about is not only the fact of putting the name of the sentence in the place of a free variable, but actually the diagonal lemma:

      G <==> ~P('G')

      This is how the sentence then comes to say of itself that it is not provable.

      But in a sense, I get the feeling that this is becoming a terminological dispute: what exactly should we call diagonalization? Just the concatenation part, or should we include the move that in fact establishes the self-reference? (in this case, the diagonal lemma) I think it does not matter too much as long as we agree on the details of the argument.

      Delete
    4. Catarina: yes, I think you are exactly right, I think this was largely terminological, sorry. (I've tried to clear up the terminology with Jeff below). One non-terminological bit might be this. One can do the Godel proof by direct self-reference: two steps as you say, 'diagonalising' in Jeff's sense, then doing that on a formula which effects self-reference, diagonalising in my sense. Or one can do it via the diagonalisation fixed point lemma. That 2nd route is different, and it seems to be getting conflated.

      I've just had a worry about etiquette here by the way. I enjoy looking in on M-Phi but am not up to being a hard-line blogger. When I discovered it was possible to comment as a guest, I just did so (and got a bit carried away as is my wont!). Hope that's ok. Best Alan

      Delete
    5. Alan, are you kidding me?! It's been great fun having you around (reverse Polish notation, age of negative 20...). You sound just the same as in person :)

      But seriously, I think this has been a fruitful discussion. The different meanings of 'diagonalization' had already come up in discussions at NewAPPS, but here you and Jeff really worked through the whole thing.

      Delete
  8. By the way, one other (less pedantic I think) bee in my bonnet. As Catarina notes, the Godel incompleteness result only holds under special conditions; for example it requires the theorems of the logic to be recursively enumerable. Prior to Godel few thought that (well, folk didn't have the concept of recursively enumerable), but for example a great many of the key figures took logical proofs to include infinite proofs (of course concrete tokens of proofs are always small, but that doesn't mean that via abbreviation they can't be tokens of very large, finite or infinite, proof types).

    Godel in 1931 seems to have been sympathetic to finitism; that of course later vanished and his later attitude seems to have prevailed. So the reaction of Zermelo and Carnap - what the result shows is not that arithmetic is incomplete but that you need the omega rule in order to get negation-complete arithmetic - almost vanished, leaving only a prejudice against infinitary logic as not 'real logic'.

    ReplyDelete
    Replies
    1. Interest in the omega-rule did come back later; I know for example that my former supervisor Sundholm was working on this stuff as a doctoral student in Oxford in the late 1970s. But he himself seems to have dropped the interest, and I don't know if there are other people actively working on infinite proofs.

      Delete
    2. It takes time and patience to work on infinite proofs! Infinitary languages are much used by set theorists of course, and there are lots of studies, Karp, Dickmann etc. but I think philosophers tend to view this as using it as a technical tool, not something which can formalise our actual reasoning. Then again appeal to languages with infinitely long wffs (not just uncountably many constants) does pop out here and there- I think Hartry Field utilises it now and again and doesn't Jaegwon Kim appeal to infinitary disjunctions in some of his stuff on supervenience? I knew Goran in Oxford back then (I was negative 20 at the time though, so I'm not really that old), so maybe he sowed the seeds!

      Delete
    3. Which of the key figures thought logical proofs included infinite proofs? Certainly all of the examples of theories considered were r.e.

      Delete
    4. I am thinking here of Peirce, Schroeder, Lowenheim, early Hilbert (one could also cite Wittgenstein in the Tractatus though I certainly don't class him as an important figure in logic, truth tables notwithstanding).

      I'm relying on secondary rather than primary sources here, (people like Gregory Moore if I remember). Those logicians seemed to have treated quantification as infinitary disjunction and conjunction and so no interesting theory could be r.e. because the wffs themselves in general are all long. Moreover any useful proof theory when fully formalised (which didn't happen until later of course) will have to appeal to infinitely long proofs.

      Delete
    5. sorry: I wrote 'the wffs themselves in general are all too long', i.e. infinitely long, but scrubbed the 'too' when editing. There are such things as recursive versions of the omega rule where the infinitary rule is coded by a finite program, but in general wffhood, far less theoremhood, can't be r.e. in a language with uncountably many sentences. And formalising languages with infinitary conjunctions and disjunctions in the obvious way, as in standard infinitary languages (I wouldn't count substitutional quantification as such a formalisation) leads to uncountably many wffs and proofs.

      Delete
    6. Hm. I just wouldn't call any of this an "infinite proof" in the relevant sense. For this, I think, we'd have to look at the tradition of the axiomatic method. I doubt that anyone ever suggested that an infinite chain of inferences from Euclid's axioms, e.g., counts as a proof in geometry.

      Delete
    7. Sure- but how does that tell against the idea of infinitary proofs? As we know, infinitary proofs are structured too, just as much as finitary proofs. Take a standard architecture which has the proofs as functions from an initial segment of the ordinals, the output at each ordinal i being the ith line and with rules for how later lines must relate to earlier. If it's a sequent system you will have that, for example, if U(X_j) : &(A_j) appears at say a limit line lambda then X: A must occur at an earlier line for each assumption set X indexed by j and each conjunct A indexed by j. Why can't I say that this is infinitary &I and is how universal introduction would be cashed out in an infinitary logic with no quantifiers, quantification expressed by infinitary means? So arbitrary strings of wffs no more count as proofs in infinitary systems than the finitary, in both cases there has to be structure.

      Of course, if we as logic teachers raised in a tradition going through Frege and Tarski explain quantification as a sort of infinitary conjunction or disjunction we may well just view this way of putting things as a pedagogic aid. But prior to Tarski, and working in a distinct tradition from Frege, that couldn't quite have been how things were viewed. The later technical work on infinitary logic seems to me a reasonable way to introduce proof procedures in a way consonant with the idea of formalising generalisation by infinitary expressions which seems to have been prevalent in those logicians I mentioned.

      (Am I right about early Hilbert btw? I don't have many books readily at hand at the moment, but I'm sure I've read some Hilbert which seemed to take infinitary wffs and truth functions seriously, but maybe that's my dodgy memory again.)

      Still, looking over what I wrote I said 'key figures took logical proofs to include infinite proofs', so maybe that's wrong. What about 'took formal languages to include formulae which are infinitely long'. And then one might speculate that if a form of finitism hadn't set in in proof theory (it certainly didn't in model theory) then, as formalised proof systems for those languages came to be developed, infinitary logic would not have been viewed as a mere technicality for set theorists and a few odd students of Tarski's.

      Delete
  9. Aargh, I used angled brackets in my Godel substitution example and that seems to have caused text to vanish. I'll use the % sign instead, so it should read:

    Godel's original method (I'm going by memory of the proof I don't have my van Heijenoort with me) would have diagonalised II by something like III

    Sub(%Sk0%,Sk0) = 0

    where k is the Godel code of II, 'Sk0' is the canonical numeral naming the number k, '%Sk0%' the canonical numeral of that canonical numeral, the latter being the string consisting of k successor terms 'S' then '0'. The subject term in III thus actually refers to III by diagonalising on II because the substitution function outputs what you get when you substitute for a fixed variable the numeral which is input, here %Sk0%.

    ReplyDelete
  10. Final, I hope correction: and II in the Godel case should actually be

    Sub(%x%,x) = 0

    God, it's tricky getting these things right (if only I had the ability to read texts closely which seems to be required in the intellectually exacting world of postmodernism).

    ReplyDelete
  11. Final, I hope,correction. And for the Godel example we have to change II to

    Sub(%x%,x) = 0

    God, it's tricky getting these things right. (If only I had the ability to read texts closely which seems to be required in the exacting world of postmodernism.)

    ReplyDelete
  12. Actually Jeff, maybe you are right about the Quine case: in 'its own quotation' we have an anaphoric not a demonstrative pronoun so this does seem to be an example of genuine self-reference using quotation. There is a style of computer program called a quine, isn't there, after this example: it outputs itself!

    ReplyDelete
    Replies
    1. Hi Alan, there are two separate things here:

      (a) the syntactic diagonalization operation on a formula $\phi$.

      (b) the fixed point sentence $\theta$ for a given formula $\phi$.

      These are not automatically the same! The diagonalization of $\phi$ is simply $\phi(x/\ulcorner \phi \urcorner)$. That is, the result of substituting the quotation $\ulcorner \phi \urcorner$ for all free occurrences of $x$ in $\phi$. (The quotation of $\phi$ is the numeral $\underline{n}$, where $n$ is the code of $\phi$.)

      To obtain the fixed-point formula, say $\theta$, one needs to have the syntactic notion of diagonalization *represented* in the theory. For this, $Q$ is sufficient, as all recursive functions are representable in $Q$. But suppose, to get a more intuitive feel, it is given by a function symbol $diag(x)$, introduced into $PA$ by explicit definition (this is possible because $PA$ proves the totality of all primitive recursive functions).

      Then, given a formula $\phi$, the fixed-point formula $\theta$ is the diagonalization of $\phi(diag(x))$. I.e., $\theta$ is $\phi(diag(\ulcorner \phi(diag(x)) \urcorner))$.

      And then we get the conclusion of the diagonal lemma:

      $PA \vdash \theta \leftrightarrow \phi(\ulcorner \theta \urcorner)$.

      So, for special cases,

      (i) $G$ is $\neg Prov_{PA}(diag(\ulcorner \neg Prov_{PA}(diag(x)) \urcorner))$.

      (ii) $\lambda$ is $\neg T(diag(\ulcorner \neg T(diag(x)) \urcorner))$.

      About quining and programs, yes, that's right. I think the term "quining" may have originated with Douglas Hofstadter. Presumably Smullyan was influenced by Quine in giving his version (I don't have books with me as I'm at the Westbahnhof in Vienna, but if I remember right, Smullyan's approach is explained in his article in the volume Philosophy of Mathematics, edited by Hintikka).

      Cheers, Jeff

      Delete
  13. Here is Paul Livingston's response to my post (and to the other posts), in case anyone is interested;

    http://www.newappsblog.com/2012/03/paul-livingston-responds-to-the-new-apps-symposium-on-derrida-and-formal-logic.html

    1. One observation on his response. He writes

      'It is worth noting, as well, that the actual proof of Gödel’s theorems, though it is about formal systems of a certain sort, is typically carried out in natural language (and must, at any rate, be carried out in a metalanguage)'

      So it looks as if he thinks the first incompleteness proof can't be formalised and carried out in the theory T we show negation-incomplete. But, at least with the right conditions on T, that's not true, and that's the starting point for the usual version of the second incompleteness proof. Maybe he is thinking just of the informal semantic version of the first theorem.

  14. Oops - Alan, yesterday I tried to repair my earlier comment (its latex had a missing tag). But I think it removed another comment by you where you'd replied to it?

    Jeff

    1. Jeff: I don't think so - there is too much there anyway! My clear and distinct impression that 'to diagonalise' means to generate self-reference via the diagonalising function has gone the way of a lot of my clear and distinct perceptions and memories: a standing refutation of Descartes! Apologies for going on at cross purposes.

      What had stirred me to comment was a worry that, in the discussion on arithmetic and incompleteness and then this one, the brilliance of Godel's achievement wasn't coming across. 'Quotation name' set off alarm bells because it makes me think of quotation, not Godel naming. But more than that: though arithmetization of syntax and diagonalising (as standardly construed) are neat, they aren't sufficient to get us to Godel's results (nor necessary, as the comments on arithmetic and incompleteness make clear).

      What showed Godel's genius was his seeing that the diagonalisation operation could be expressed arithmetically, was primitive recursive (well, he had the minor task of coming up with the whole concept of p.r. there), and, whilst not doing all the donkey work, sketching how the function is representable and thereby enables us to achieve self-reference purely formally. Not everyone could have come up with that!

      But you had already made the point that it's diagonalising on a formula which already contains a representation of the diagonal function which does the work, at the end of your first posting here - I didn't read that properly, partly because I was trying to decode the latex code without compiling it (and when I did, the corner quotes didn't come out. How do you make commands like \ulcorner work when they aren't recognised? Get a better editor?)

      The distinction between Godelian self-reference and the later fixed point lemma was also in my mind because it's been discussed a bit recently. On the other hand, as your examples show, a self-referential sentence yields an instance of the fixed point diagonalisation lemma; it's just that the latter is more general.

    2. Actually Jeff I think you did remove a comment. How did you manage that? That's a pint you owe me!

      If I remember, I was just realising I was at cross purposes and trying to translate, noting that I think we essentially agreed. Sticking a name of an open sentence into the free variable place of that sentence (diagonalising) doesn't do the job on its own; the special class where the formula we diagonalise on is Phi(diag(x)) (or equivalent, if not using function terms) is where the action is.

      Something like that, so covered by the previous no worries. I do have a clear and distinct memory, though, that I added as a parenthetical comment a conclusive proof of Goldbach's conjecture. And now I can't remember how it went, and if you haven't cached it, it's gone! So in fact that's two pints you owe me. Alan

  15. Hi, sorry, nothing deep to say, just pointing out a (likely) typo: Did you mean Andrzej Grzegorczyk, or is there some other guy named Grzegorczyc?

  16. Hi Rafal, "Did you mean Andrzej Grzegorczyk"

    Yes, E.g.,

    http://www.impan.pl/~kz/files/AGKZ_Con.pdf

    Jeff

  17. Richard wrote: "Which of the key figures thought logical proofs included infinite proofs?"

    Will Brouwer and Zermelo do as key figures?

    If so, for Brouwer see the famous footnote to 'Definitionsbereiche ...' in Van Heijenoort (ed.), From Frege to Gödel, footnote 8, p. 446. For Zermelo, check the locations given by Kreisel in footnote 22, p. 511 of Myhill & Kino (eds.), Intuitionism and Proof Theory.

    R. G. Taylor has interesting articles on Zermelo and infinite proofs.
