Is the Role of Truth Merely Expressive?
I worked (a bit) on truth theories in my PhD. I was interested in the question of whether the role of truth is "merely expressive" (as deflationists claimed) or whether it might be "explanatory" (as non-deflationists might claim). One day in 1997, the thought
"I wonder if the T-scheme is conservative over $\mathsf{PA}$?"popped into my head. (I remember the incident quite well!) If so, there might be an interesting analogy between three kinds of instrumentalism:
(i) Instrumentalism about infinitary objects in mathematics (Hilbert).
(ii) Instrumentalism about mathematical objects in connection with physics (Field).
(iii) Instrumentalism about semantic properties (Field, Horwich).
Debates surrounding the first two eventually turned on whether an extended "ideal" theory $T_I$ was conservative with respect to the underlying "real" one $T_R$. And I knew that, in some cases, the extended theory is not conservative: for Gödelian reasons.
In the case of Hilbertian finitism/instrumentalism, Hilbert's aim was to show that "Cantor's Paradise" (of infinite sets) was a convenient, but dispensable, instrument for proving finitary combinatorial facts. The hope was to prove Cantorian set theory consistent using just finitary assumptions, perhaps as encoded in $\mathsf{PRA}$ or $\mathsf{PA}$:
In the early 1920s, the German mathematician David Hilbert (1862-1943) put forward a new proposal for the foundation of classical mathematics which has come to be known as Hilbert's Program. It calls for a formalization of all of mathematics in axiomatic form, together with a proof that this axiomatization of mathematics is consistent. The consistency proof itself was to be carried out using only what Hilbert called "finitary" methods. The special epistemological character of finitary reasoning then yields the required justification of classical mathematics. (Richard Zach, 2003, "Hilbert's Program", SEP.)
But it follows from Gödel's results that a consistent finitary theory $T$ of arithmetic can be extended with a suitable amount of Comprehension (extending induction too), and that the result, $T^{+}$, proves theorems in the language of $T$ that $T$ doesn't. The most well-understood examples involve a theory $T$ and its impredicative second-order extension $T^{+}$: for example, the passage from $\mathsf{PA}$ to second-order arithmetic $\mathsf{Z}_2$. But there are other kinds of example.
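For concreteness, here is the best-known instance (a standard fact, recorded just to fix ideas):
$\mathsf{PA} \nvdash \mathrm{Con}(\mathsf{PA}), \quad \text{while} \quad \mathsf{Z}_2 \vdash \mathrm{Con}(\mathsf{PA}).$
The first half is Gödel's second incompleteness theorem (assuming $\mathsf{PA}$ is consistent); so the "ideal" extension proves new theorems, indeed new $\Pi^0_1$ theorems, in the "real" arithmetic language.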
It is this insight about increasing power which in part leads to the idea of an "Interpretability Hierarchy", an idea pursued vigorously by Harvey Friedman for several decades. The idea is that, with certain exceptions, mathematical theories form an ascending linear hierarchy of interpretability strength, with weak systems at the bottom (e.g., $\mathsf{Q}$ and $\mathsf{AST}$ (Visser's Adjunctive Set Theory)) and very powerful set theories, with large cardinal axioms, much higher up.
The case of Fieldian nominalism is similar. Consider, for example, Hartry Field's nominalistic reformulation of Newtonian mechanics and gravity in his brilliant monograph,
Field, H. 1980: Science Without Numbers. (Tragically out of print.)
This theory $T$ can be extended with a suitable amount of Comprehension (one extends certain schemes in $T$ too), and the result, $T^{+}$, proves theorems in the nominalistic language that $T$ doesn't.
These Gödelian objections were raised by John Burgess and Yiannis Moschovakis, and Field mentions them in the Appendix of his monograph; I believe similar objections were raised, but not published, by Saul Kripke. Shortly after, the objections were spelt out in a bit more detail by Stewart Shapiro in his:
Shapiro, S. 1983: "Conservativeness and Incompleteness" (Journal of Philosophy)
So, knowing much of this, I began to see whether the same situation held for truth/semantics, and eventually it became clear that the analogy did hold quite well (I got some help here from John Burgess). In some cases, truth axioms are conservative: e.g., disquotational axioms, $T(\ulcorner \phi \urcorner) \leftrightarrow \phi$, with the sentences $\phi$ restricted to the object language. And in some cases they are not (e.g., Tarski-style compositional truth axioms). At the time, I had no idea that these technical issues had been investigated by other authors (Feferman, Friedman, Sheard, Cantini, Halbach), though I found out quite quickly after my paper on the topic appeared! My paper on this is:
Ketland, J. 1999: "Deflationism and Tarski's Paradise" (Mind)
("Tarski's Paradise" is a joke, alluding to the analogy mentioned above with "Cantor's Paradise".)
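The conservativeness of the disquotational axioms (at least when induction is not extended to the new predicate) can be seen from a standard model-theoretic sketch: take any model $\mathcal{M} \models \mathsf{PA}$ and interpret $T(x)$ by the set of codes of object-language sentences true in $\mathcal{M}$. Each axiom $T(\ulcorner \phi \urcorner) \leftrightarrow \phi$ then holds in the expanded model, so every model of $\mathsf{PA}$ expands to a model of $\mathsf{PA}$ plus disquotation, and no new $T$-free theorems become provable.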
Around the same time, Stewart Shapiro developed more or less the same argument, in his
Shapiro, S. 1998: "Proof and Truth: Through Thick and Thin" (Journal of Philosophy)
This kind of objection to deflationism (the "conservativeness argument", as it has come to be called) argues that if deflationism is understood as claiming that the role of truth is "merely expressive", then it must require the conservativeness of one's truth theory; and this requirement is inconsistent with the manifest non-conservativeness of truth axioms in certain cases. More recently, some authors have followed Shapiro and myself in calling truth theories "deflationary" or "non-deflationary" depending on whether or not they are conservative, explicitly restricting attention to conservative truth theories if they wish to defend deflationism. This issue is complicated, and usually depends on whether one allows inductive reasoning using the truth predicate.
The argument that Shapiro and I gave can be summarized like this (see this M-Phi post "Reflective Adequacy and Conservativeness" (17 March 2013)):
(P1) A truth theory is deflationary only if conservative over suitably axiomatized theories $B$.
(P2) A truth theory is reflectively adequate only if it combines with $B$ to prove "all theorems of $B$ are true".
(P3) For many cases of $B$, reflective adequacy implies non-conservativeness.
-----------------------------------------------------------------------
(C) So, deflationary truth theories are reflectively inadequate.
For a similar formulation and discussion, see also:
Armour-Garb, B. 2012: "Challenges to Deflationary Theories of Truth" (Philosophy Compass)
The second premise (P2) corresponds to Hannes Leitgeb's adequacy condition (b) in his:
Leitgeb, H. 2007: "What Theories of Truth Should be Like (but Cannot be)" (Philosophy Compass)
Leitgeb formulated (P2) as follows:
"(b) If a theory of truth is added to mathematical or empirical theories, it should be possible to prove the latter true"Leitgeb adds that this is "uncontroversial".
Paul Horwich has an interesting interview with 3am Magazine, titled "Deflationism and Wittgenstein". In it, he makes a couple of crucial points concerning his own form of deflationism, which make it clear that he endorses some form of instrumentalism:
Thus truth is not as profound a phenomenon as has often been assumed. Its role, even in philosophy, must be merely expressive rather than explanatory.
The idea here is that the single sentence:
(1) for any proposition $x$, if $A$ asserts $x$, then $x$ is true
"re-expresses" the scheme:
(2) if $A$ asserts that $p$, then $p$.
It is certainly the case that recursively enumerable theories $S$ in some language $L$ meeting certain conditions can be re-axiomatized as a finite set of axioms, by introducing a satisfaction predicate $Sat(x,y)$. This result was originally given by S.C. Kleene (see here). See
Craig and Vaught 1958: "Finite Axiomatizability using Additional Predicates" (JSL)
for a strengthened version of Kleene's results.
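The rough shape of the trick (a sketch of the idea only, not of the exact Kleene or Craig-Vaught construction): one adds $Sat(x,y)$ together with finitely many Tarski-style clauses governing its behaviour on complex formulas, plus a single axiom saying that every axiom of $S$ is satisfied; this is expressible because $S$'s axioms are recursively enumerable. A familiar example of the phenomenon: $\mathsf{PA}$ is not finitely axiomatizable, but $\mathsf{ACA}_0$, which recovers each induction axiom via its comprehension axioms, is finitely axiomatizable and conservative over $\mathsf{PA}$.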
In philosophy of logic, this point appeared quite a while later, in W.V. Quine's Philosophy of Logic (1970) and then
Leeds, S. 1978: "Theories of Reference and Truth" (Erkenntnis)
Quine was probably not advocating the deflationary view, and was instead endorsing Tarski's semantic conception of truth. It is a serious error to insist that Tarski was a "deflationist". He argued strongly against the deflationary view of his time, "the redundancy theory". Most contemporary arguments against deflationism are due, in fact, to Tarski. E.g., both the problem of generalizations and the non-conservativeness of axiomatic truth are there in his classic:
Tarski, A. 1936: "Der Wahrheitsbegriff in den formalisierten Sprachen" (Studia Philosophica)
But Leeds was putting forward the deflationary view, and many others have followed suit, including Horwich and Field: the deflationary claim is that the reason that languages contain a truth predicate is so that schemes like (2) can be "re-expressed" as single sentences, like (1). Hence the slogan that
"Truth is merely a device for expressing schematic generalizations"
The presence of "merely" is crucial. Remove the "merely", and the above becomes a theorem of mathematical logic! (As mentioned above.) But with "merely" added, it becomes an interesting philosophical claim.
It is not entirely clear what "re-expresses" means. If "re-expresses" means that one can infer one from the other, then although, in fact, one can infer all instances of (2) from (1) (in some fixed language, assuming disquotation sentences $T(\ulcorner \phi \urcorner) \leftrightarrow \phi$ with $\phi$ in the $T$-free language), one cannot infer (1) from all instances of (2), for compactness reasons.
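To sketch the compactness point (in one natural regimentation, with assertion expressed by a predicate $Assert_A(x)$ on codes; the details here are my gloss): the instances of (2), plus disquotation, yield $Assert_A(\ulcorner \phi \urcorner) \to T(\ulcorner \phi \urcorner)$ for each particular sentence $\phi$. Now add a fresh constant $c$ and the axioms $Assert_A(c)$, $Sent(c)$ and $\neg T(c)$. Any finite subset of the resulting theory mentions only finitely many sentences, so it is satisfiable: interpret $c$ as the code of some false sentence not among them, and let $A$ "assert" just that. By compactness, the whole set has a model, in which every instance of (2) holds but (1) fails. So (1) is not a logical consequence of the instances of (2).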
There has been an interesting mini-literature on this topic, and important papers on the topic are:
Halbach, V. 1999: "Disquotationalism and Infinite Conjunctions" (Mind)
Heck, R. 2005: "Truth and Disquotation" (Synthese)
Even if the "re-expression" component of deflationism could be clarified and sustained, we still need to understand what "explanatory" comes to, and whether the principles governing truth (or a truth predicate) are never explanatory. In mathematics, some proofs are considered explanatory and some non-explanatory. In mathematical logic, semantic reasoning is sometimes used. After all, one makes use of notions like
"$\phi$ is true in the structure $\mathcal{A}$".The following example is of the kind given in Stewart Shapiro's "Proof and Truth ..." cited above:
Question: If $G$ is a Gödel sentence for $\mathsf{PA}$, it is true. Why is it true?
Answer: Because each axiom of $\mathsf{PA}$ is true, and derivations preserve truth; therefore all theorems of $\mathsf{PA}$ are true. In particular, $G \leftrightarrow \neg Prov_{\mathsf{PA}}(\ulcorner G \urcorner)$ is a theorem of $\mathsf{PA}$, and is therefore true. So, $G$ is true if and only if $G$ is not a theorem of $\mathsf{PA}$. Now, since all theorems of $\mathsf{PA}$ are true, if $G$ were a theorem of $\mathsf{PA}$, then $G$ would be true, and hence (by the biconditional) not a theorem of $\mathsf{PA}$: a contradiction. So, $G$ is not a theorem of $\mathsf{PA}$. And therefore $G$ is true.
I should note that one can respond to this by insisting that consistency (rather than soundness) is sufficient to obtain the conclusion that $G$ is true. Then the question turns into one about why one should think $\mathsf{PA}$ is consistent.
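Indeed, for the usual Gödel sentence constructed via the diagonal lemma, this response rests on a standard fact:
$\mathsf{PA} \vdash G \leftrightarrow \mathrm{Con}(\mathsf{PA}).$
So accepting $\mathsf{PA}$ together with $\mathrm{Con}(\mathsf{PA})$ already yields $G$, with no semantic detour.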
Furthermore, one can formalize the truth theory within some object language, containing a good theory of truth bearers and a truth predicate governed by certain axioms, and examine its properties and behaviour. We might extend a theory we accept, such as $\mathsf{PA}$, with a truth predicate $T(x)$ governed by compositional truth axioms. The result is what Volker Halbach calls $\mathsf{CT}$ in his monograph
Halbach, V. 2011: Axiomatic Theories of Truth.
It seems to me that, within $\mathsf{CT}$, the role of truth is not merely "expressive". For:
- $\mathsf{CT}$ proves "All theorems of $\mathsf{PA}$ are true";
- $\mathsf{CT}$ proves new arithmetic facts beyond what $\mathsf{PA}$ does (in particular, coded consistency facts).
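To spell out how the second point follows from the first (a sketch of the standard argument, with the usual arithmetization): $\mathsf{CT}$ proves $\forall x(Prov_{\mathsf{PA}}(x) \to T(x))$. The compositional axioms give $T(\ulcorner 0=1 \urcorner) \leftrightarrow 0=1$, hence $\neg T(\ulcorner 0=1 \urcorner)$, and so $\neg Prov_{\mathsf{PA}}(\ulcorner 0=1 \urcorner)$: that is, $\mathsf{CT} \vdash \mathrm{Con}(\mathsf{PA})$, which, by Gödel's second incompleteness theorem, $\mathsf{PA}$ itself does not prove (assuming $\mathsf{PA}$ is consistent).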
Any such truth theory is therefore non-conservative, at least with respect to a large class of "base theories" ("object language theories", in Tarski's terminology). From this, surely the role of the truth predicate is not "merely expressive". And, if that's right, then the resulting truth theory is non-deflationary.
Later in the interview, Horwich suggests:
Absent a demonstration of this – absent some evidently good theory in which truth is not just a device of generalization – then to speak of deflationism robbing us of a valuable tool is to beg the question.
But we do, at least plausibly, have a demonstration of this: the compositional theory $\mathsf{CT}$.
[Updates (16, 17 August). I added some more material related to the reflective adequacy condition on truth theories (Hannes Leitgeb's condition (b) in his article about adequacy conditions on truth theories). I added a link to the paper by Richard Heck on T-sentences and the issue of the role of disquotational truth axioms in the "re-expression" of schematic generalizations.]
Hi Jeff,
It's funny that you posted on this topic -- I actually have a draft paper responding to your and Shapiro's argument (which I'd be happy to send to you or anyone else interested). The very short version: I agree that the deflationist is committed to the conservativeness of the truth theory over the base theory (i.e. in this case PA), but I think everything is going to turn on whether conservativeness is understood in terms of proof-theoretic consequence or in terms of a stronger notion of semantic consequence (for example, second-order consequence under the standard "full" semantics, although there are other alternatives that I go into in the paper).
The reason this matters is because the deflationist can make a plausible disjunctive reply to the challenge, depending on whether or not we can defensibly be said to have a conception of arithmetic that outstrips what is provable from PA. If we do, then (presumably) we have to grasp a notion of consequence that goes beyond proof-theoretic consequence, and the relevant sense of conservativeness will be defined in terms of this notion; but that is no problem for the deflationist, since G is then going to be a genuine consequence of the axioms, and the compositional truth theory you call CT will be conservative in the relevant -- semantic -- sense. But if we don't have such a conception of arithmetic, then the reasons for requiring the derivation of G seem to lapse altogether; in this case, the deflationist can accept a truth theory that is weaker than CT -- and that is in fact proof-theoretically conservative -- with a clear conscience.
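(For definiteness, one natural way of making the contrast precise: say that a truth theory $S$ is conservative over $B$ relative to a consequence relation $\models$ iff, for every sentence $\phi$ of $B$'s language, if $B \cup S \models \phi$ then $B \models \phi$. The two disjuncts then correspond to taking $\models$ to be first-order derivability or full second-order consequence.)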
Hello Dan, many thanks.
Please send me your paper, that sounds interesting. My email is:
jeffrey.ketland@philosophy.ox.ac.uk
Volker Halbach has argued that the deflationist doesn't have to accept conservativeness (you probably know this - it's "How Innocent is Deflationism?", Synthese, 2001.)
Your reply sounds a bit like what Stewart Shapiro argued for in the second part of his "Proof and Truth" paper, where he discusses second-order consequence. I'm interested to have a look.
Cheers,
Jeff
Hi Jeff,
The main point I took away from the Halbach paper you mention was that the proper formulation of the conservativeness requirement needs to be finessed to include the theory of syntax we're working with. I wasn't convinced by the other arguments as to why conservativeness isn't required of a deflationary theory; I think it would be better to say that the theory should be conservative, just relative to a different notion of consequence.
My main disagreement with Shapiro is that while he seems to think that moving to a strengthened notion of logical consequence is somehow problematic for the deflationist by building in a commitment to the "robustness" of truth, I see no reason to think this is the case. Rather, the pressure to move to such a notion of consequence comes from the idea that we have a categorical conception of the natural numbers. Of course, there are legitimate worries here, both about the ontological commitments of the logical resources required for this (e.g. Quine's scepticism about second-order logic) and about the epistemic tractability of the kind of non-effective consequence relation that would be needed. But, as far as I can tell, if these are problems, they are problems for everyone -- I don't see how conceiving of truth as robust is supposed to help solve them in a way that the deflationist in particular is unable to accept.
I've sent the paper along -- hope you find it of interest!
Hey Dan, could you send me the paper too? Sounds very sensible.
Thanks, Dan - got it - very interesting!
I see you mention Richard Heck's older paper on the disentangling syntax issue. I don't think Richard published it. But did you see the article by Carlo Nicolai and Graham Leigh about disentangling syntax? This is now going to appear in Review of Symbolic Logic. I'm pretty sure you can find this by googling.
Hannes, Volker and I have been discussing this issue for 6 or 7 years, so I'm glad something is appearing on the topic. Right - it's important to separate syntactical induction and arithmetic induction. Ideally, I think the truth theory per se should occur in a theory already containing syntactical induction. In a reply to a nice article by Cieslinski (Mind, 2009), I argue pretty much as Richard does, that the non-conservativeness is, ultimately, due to the compositional truth axioms.
(Cf., everyone agrees that Impredicative Comprehension is powerful. I.e.,
$\exists X \forall n(n \in X \leftrightarrow \phi(n))$
where $\phi(n)$ is from $L_2$ (not containing $X$ free). But if you add this to PA and *restrict induction* to $L$-sentences, then, I am fairly sure, the result remains conservative over PA.)
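(A sketch of why, as far as I can see: take any $\mathcal{M} \models \mathsf{PA}$ and let the set variables range over the full power set of $\mathcal{M}$'s domain. Every instance of Comprehension then holds trivially, since each $L_2$-definable subset of the domain is in the range of the set variables, and $L$-induction holds because $\mathcal{M} \models \mathsf{PA}$. So every model of $\mathsf{PA}$ expands to a model of the extended theory, which yields conservativeness.)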
I think one of the results you mention in the paper, taken from Shapiro 2002, mightn't be quite right: i.e., the claim that compositional truth (without induction) proves that "deduction preserves truth". For a single step, e.g., Modus Ponens, this is ok. But to show that,
"if x is a deductive consequence of y1, ..., yn, and y1, ..., yn are true, then x is true"
you need induction using the truth predicate: the proof goes by induction on the length of the derivation, with $T$ occurring in the induction hypothesis. Cieslinski's 2009 paper has information about this. This is closely related to the reflection principle for *logic*.
Oh, Volker's proof in his book that CT(restricted) is conservative isn't quite right either! This problem was spotted by Kentaro, and has been fixed now in other work by Visser & Enayat and Leigh. I linked to a preprint of Graham's paper a week or so ago.
Cheers,
Jeff
Hi Dan,
Another, but crucial, issue is the reflection condition that Shapiro (1998) and I (1999) give as an adequacy condition on truth theories.
Hannes Leitgeb 2007 ("What Theories of Truth Should be Like (but Cannot Be)") describes it as follows:
"(b) If a theory of truth is added to mathematical or empirical theories, it
should be possible to prove the latter true"
If I understand correctly, you're rejecting this adequacy condition?
Cheers,
Jeff
Thanks for inadvertently pointing me to the Nicolai and Leigh paper.
Having finished the two Frege books, I'm now finally trying to finish my old truth paper. I ended up deciding it was trying to do too much, so I'm exploding it into several pieces. One of these is on the issues discussed here. I'll email it to both of you.
Oh, Jeff, the discussion of Volker's 1999 paper is in "Truth and Disquotation".
Thanks, Richard - I'll update.
Jeff
Hi Jeff,
Thanks very much for those references -- I'll be very interested to work through them.
I'm certainly committed to rejecting the condition you mention on the "merely proof-theoretic consequence" disjunct of my response. On the other disjunct, I see no reason to reject it.
I do have something to say as to why this is independently motivated. The condition presumably isn't that, for each theorem of the base theory, the truth theory proves the truth of that theorem, since that much is going to be obtainable from the fact that the truth theory entails every instance of the T-schema. Rather, it presumably means that reflection principles are derivable: that the truth theory proves a sentence expressing that every theorem of the base theory is true. And this is going to be a complex arithmetical sentence. (Which sentence, exactly, will depend on the details of our arithmetization of syntax.)
But how is such a sentence to be interpreted in a situation where (by hypothesis, on this disjunct of the reply) we accept or at least don't have the ability to rule out non-standard models of arithmetic? We are, in effect, quantifying over non-standard numbers as well as standard numbers in making these arithmetical claims and so (I claim) it's highly dubious that they express the syntactic notions (like provability or theoremhood) that we ordinarily take them to express. For instance, any formula that is satisfied by infinitely many standard numbers is satisfied by a non-standard number; so since there are infinitely many theorems of PA, there are going to be some non-standard numbers that satisfy the predicate Thm_PA. So if we accept the reflection principle, the truth of (the sentences coded by) any such non-standard number follows, and it's just entirely unclear what this could even mean (and so entirely unclear what generalizations that entail it mean either). In short, I don't see why we'd even feel compelled to believe e.g. the reflection principle for a theory, let alone demand that our truth theory prove it, in a context where non-standard models of arithmetic are being countenanced.
(It might be thought that this trades on a conflation between syntax and a representation of syntax in arithmetic. But I don't think it does. Suppose we take care to separate syntax and arithmetic, employing additionally e.g. a theory of characters and strings. This is still going to generate non-standard models of syntax, i.e. models containing strings that aren't obtainable by starting with the null string and appending finitely many characters.)
Richard -- I'd love to see the paper when it's ready. My email is danielwaxman@gmail.com. Thanks!
Hi Dan, thanks!
"But how is such a sentence to be interpreted in a situation where (by hypothesis, on this disjunct of the reply) we accept or at least don't have the ability to rule out non-standard models of arithmetic?"
I'm not sure how non-standard models enter the picture. There are non-standard models of these comments here! So, an immediate problem with this is that, when you speak, how do you rule out non-standard models of your statements about "non-standard models"? That is, is your predicate "M is a non-standard model" meaningful? If so, what does it mean?
"So if we accept the reflection principle, the truth of (the sentences coded by) any such non-standard number follows"
Is the idea that when someone accepts that every theorem of PA is true, then this refers to non-standard numbers? For example, does "every prime has a larger prime" refer to non-standard numbers?
Suppose we take an empirical theory, T. Then this has non-standard models too. Does this cause a problem for what "every theorem of T is true" means?
Cheers,
Jeff
Hi again, Dan
This is more of a technical issue.
You say in the paper that the restricted theory that Volker calls $\mathsf{CT}_{\upharpoonright}$ proves,
(T-Ax) $\forall \phi(Ax_{PA}(\ulcorner \phi \urcorner) \to T(\ulcorner \phi \urcorner))$
(T-Inf) $\forall \phi \forall \psi \forall \chi((Inf_{PA}(\phi, \psi, \chi) \wedge T(\ulcorner \phi \urcorner) \wedge T(\ulcorner \psi \urcorner)) \to T(\ulcorner \chi \urcorner))$
I may be misremembering, but I think Shapiro may have written that these are theorems (in his 2002). But, again, if I remember right, these aren't theorems. One would need induction to prove them.
E.g., one uses induction to prove "all induction axioms of PA are true". If the second amounts to the global reflection principle for logic, then again this isn't a theorem of $\mathsf{CT}_{\upharpoonright}$, by results in Cieslinski 2009.
But this would have to be checked, as I'm going from memory!
Cheers,
Jeff
Hi Jeff,
What I call T-Inf isn't supposed to amount to the global reflection principle for logic; Inf_PA is supposed to express only one-step inferences (e.g. instances of MP or Gen only), not arbitrarily long chains of inferences. You're probably right that I got the result from Shapiro 2002; I don't have that paper to hand, but I'll go back and check that I have the theorems right and that the proofs are sound.
As for the issue about non-standard models: I think it matters that the response is disjunctive, and that in particular we're operating within the disjunct according to which we can't be said to have a grasp of a non-effective consequence relation; and I take it that this is tantamount to failing to be able to rule out non-standard models. So it's not that I subscribe to a general scepticism about our ability to single out an intended model; far from it! Just that if such a scepticism is warranted (something which I don't take a stand on in the paper), then the whole machinery of formalizing syntax breaks down.
Obviously this is all very compressed, so I should probably revise the paper and spell it out in more detail. I'll send it to you when it's finished, if you'd like.
Hi Dan,
Yes, I get it - thanks. So, T-Inf is ok in that case, as it's really just the Tarskian truth axiom for $\to$. But to prove T-Ax, "all axioms of PA are true", one would need an instance of induction with the truth predicate, and that isn't available. In the new papers by Leigh and Nicolai & Leigh, they give some results about the conservativeness of simply adding T-Ax.
Cieslinski 2009 has some useful information on reflection for logic, i.e., the global reflection principle for logic:
(L) for all $\phi$, if $\phi$ is a theorem of logic, then $\phi$ is true.
If I recall Cieslinski's results right, this reflection principle cannot be proved in $\mathsf{CT}_{\upharpoonright}$. But if you add this to $\mathsf{CT}_{\upharpoonright}$, then you get $\mathsf{CT}$ back, because one can prove all induction instances for $L$. (I may not be remembering this exactly.)
The issue of non-standard models is more complicated philosophically. I reread your section again, and understand it much better.
I think the problem with this kind of scepticism (in effect, as you make clear) is that even simple talk about syntactical entities, e.g., a claim like,
(S) for any strings $\sigma_1, \sigma_2$ in $L$, there is a concatenation of them.
has to be thought of as semantically indeterminate, because we might be referring to non-standard strings. And if the semantics of sentences about syntactical entities is indeterminate, then certainly discussion of "non-standard models" is itself indeterminate too. This then gets into heavy-duty metasemantics ... (a formalist might think that *syntax* (e.g., the axioms we "accept") has to determine the language L spoken by a speaker; I don't see why this has to be so, but it's a huge topic ...)
I sort of agree with Volker (2001) that this kind of response is a trap, or at least a heavy price to pay, for the deflationist. The reflection condition that Shapiro, Leitgeb and I impose seems fairly easy to formulate, even uncontroversial, for finitely and schematically axiomatized theories. If I understand truth for the language of a theory B and I accept B, I should be able to prove "all theorems of B are true".
You mention violations of reflection adequacy, suggesting $\mathsf{PRA}$. Yes, there are violations of this! $\mathsf{PRA}$ has infinitely many primitive function symbols and thus the Tarskian semantic axioms will have infinitely many axioms, one for each function symbol. Consequently, one will not be able to prove
"all axioms of $\mathsf{PRA}$ are true"
A simpler example is just to have a language $L$ with a constant $a$ and infinitely many primitive unary symbols $P_i$, and a theory $B$ with the axioms
$P_0(a)$
$P_1(a)$
$\dots$
$P_i(a)$
$\dots$
The compositional truth theory then cannot prove "all axioms of B are true". This example was mentioned to me by Torkel Franzen. I mention this kind of case in my reply to Cieslinski (my reply is "... Reply to Cieslinski", Mind).
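(A sketch of why, I take it: the compositional theory proves each particular instance $T(\ulcorner P_i(a) \urcorner)$, but, by compactness, there are models of it containing a non-standard "axiom of $B$" which is not true, and in such models the generalization "all axioms of $B$ are true" fails.)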
For the case of $\mathsf{PRA}$, this problem is overcome when moving to $\mathsf{PA}$, because for each primitive recursive function $f : \mathbb{N}^k \to \mathbb{N}$, there is a definition of $f$ using just $0,s,+,\times$, that works in $\mathsf{PA}$.
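(The standard example: there is no term for exponentiation in the language $0,s,+,\times$, but, via Gödel's $\beta$-function, one can write a formula $E(x,y,z)$ such that $\mathsf{PA}$ proves it defines a total function satisfying the recursion equations $x^0 = 1$ and $x^{y+1} = x^y \cdot x$.)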
Cheers,
Jeff