## Monday, 25 April 2011

### More on the Validity Predicate

Jeff has posted about adding a validity predicate to arithmetic here. I have been thinking about this, and have a different twist. Assume that we add a logical validity predicate Val(x, y) to arithmetic (in what follows, 'F' stands for the Gödel code of the formula F). Val(F, G) holds iff the argument from F to G is logically valid. Now, one rule that a logical validity predicate ought to satisfy is:

VS2: (Val(F, G) & F) entails G.

The second rule that a logical validity predicate ought to satisfy is:

VS1: If F entails G, then Val(F, G).

The trick here is that we need to decide, when applying VS1, which sense of 'entails' we have in mind. Should we conclude that Val(F, G) holds if:

1. G is derivable from F in first-order logic?
2. G is derivable from F in first-order logic supplemented with VS1 and VS2?
3. G is derivable from F plus arithmetic?
4. G is derivable from F plus the T-schemas?
5. etc.

There are extant arguments that show that options 3 and 4 are inconsistent (Beall & Murzi [unpublished] and Shapiro show this for 3, and Whittle shows this for 4). But, of course, it is rather implausible that either arithmetic or the T-schemas are logically valid (of course, their arguments do show that other interesting notions of validity are inconsistent). To remind you, the derivation of a contradiction for case 3 goes like this. Diagonalization provides a sentence P such that:

P is arithmetically equivalent to Val(P, #)

where "#" is some arbitrary contradiction. We then reason as follows:

1. P (Assumption)
2. Val(P, #) (1, Diagonalization)
3. # (1, 2, VS2)
4. Val(P, #) (1–3, VS1)
5. P (4, Diagonalization)
6. # (4, 5, VS2)
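
This Curry-style reasoning is easy to model formally. Here is a minimal sketch in Lean 4, with Val treated as an uninterpreted predicate on propositions and the diagonal biconditional simply assumed as a background hypothesis (the names `Val`, `vs1`, `vs2`, and `diag` are my own labels, not anything from the post):

```lean
section
-- Val is an uninterpreted "validity" predicate on propositions;
-- vs1 and vs2 model the two rules for it.
variable (Val : Prop → Prop → Prop)
variable (vs1 : ∀ {F G : Prop}, (F → G) → Val F G)   -- VS1
variable (vs2 : ∀ {F G : Prop}, Val F G → F → G)     -- VS2
variable (P : Prop)
-- Diagonalization, assumed outright: this is where arithmetic
-- is absorbed into the background theory.
variable (diag : P ↔ Val P False)

-- Steps 1–6 of the derivation, in term form:
example : False :=
  let f : P → False := fun p => vs2 (diag.mp p) p   -- steps 1–3
  f (diag.mpr (vs1 f))                              -- steps 4–6
end
```

Because `diag` is available as an undischarged hypothesis, this models option 3, where arithmetic itself is treated as if it were valid.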

As Jeff notes in his earlier contribution, option 1 is consistent (in fact, we don't need to add a new predicate at all, since the relevant notion of validity is definable in PA). Thus, if 'entails' means derivable in first-order logic, then we can consistently add the rules above to arithmetic.

As a result, the intuitive rules for logical validity (unlike, strikingly, the intuitive rules for the truth predicate) are truth-preserving and consistent. But are they themselves logically valid? In other words, can we add versions of VS1 and VS2 to arithmetic where 'entails' means derivable in first-order logic plus VS1 and VS2? Interestingly, the answer is "no", as the following derivation demonstrates. Let Q be the conjunction of the axioms of Robinson arithmetic, and let jn(x, y) be the recursive function mapping the Gödel codes of two formulas onto the code of their conjunction. Diagonalization provides a sentence P such that:

P is arithmetically equivalent to Val(jn(P, Q), #)

We now reason as follows:

1. P & Q (Assumption)
2. Q (1, logic)
3. P (1, logic)
4. Val(jn(P, Q), #) (2, 3, logic)
5. Val(P&Q, #) (2, 4, logic)
6. # (1, 5, VS2)
7. Val(P&Q, #) (1–6, VS1)
8. Q (Assumption)
9. Val(jn(P, Q), #) (7, 8, logic)
10. P (8, 9, logic)
11. P & Q (8, 10, logic)
12. # (7, 11, VS2)
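
The same modelling idea applies here, except that the diagonal biconditional is now conditionalized on Q, so arithmetic figures only as an explicit premise, never as a background assumption. A hedged Lean 4 sketch (again, `vs1`, `vs2`, and `diag` are my own labels, and `P ∧ Q` stands in for jn(P, Q), an identity that Q itself proves):

```lean
section
variable (Val : Prop → Prop → Prop)
variable (vs1 : ∀ {F G : Prop}, (F → G) → Val F G)   -- VS1
variable (vs2 : ∀ {F G : Prop}, Val F G → F → G)     -- VS2
variable (P Q : Prop)
-- Diagonalization now holds only *given* Q: arithmetic is a
-- premise, not part of the background theory.
variable (diag : Q → (P ↔ Val (P ∧ Q) False))

-- From VS1 and VS2 alone, Q entails absurdity.
example : Q → False :=
  fun q =>
    -- steps 1–6: the subderivation uses diag only via the
    -- Q-conjunct of its own premise
    let f : P ∧ Q → False :=
      fun pq => vs2 ((diag pq.2).mp pq.1) pq
    -- steps 7–12: discharge via VS1, then re-enter with Q
    f ⟨(diag q).mpr (vs1 f), q⟩
end
```

Note that the conclusion is Q → False: the rules do not themselves collapse, but they deliver a proof that arithmetic is inconsistent, matching the observations below.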

A few observations about the proof:

• We apply arithmetic (diagonalization in the moves from 3 to 4 and from 9 to 10, and recursive arithmetic in the moves from 4 to 5 and from 7 to 9, since these depend on the arithmetical fact that jn(P, Q) = P&Q) only within the scope of an assumption of Q. Thus, the system in which the proof occurs does not assume that arithmetic is valid (or even true).
• The proof does not show that this version of the rules VS1 and VS2 is inconsistent. Instead, it shows that these rules allow one to prove that arithmetic is inconsistent (in this manner, the result is very different from the standard proof of the inconsistency of the T-rules).

Anyway, this is kind of cool. Just as the Liar paradox (or, if you want to be fancy, Tarski's theorem) shows that the T-sentences governing the truth predicate can't be true, this shows that the rules for the logical validity predicate can be true, but can't be logically valid.

References:

Beall, J. & Murzi, J. [manuscript], “Two Flavors of Curry Paradox”, online at:

Shapiro, L., “Deflating Logical Consequence”, The Philosophical Quarterly 60(*).

Whittle, B., “Dialetheism, Logical Consequence and Hierarchy”, Analysis 64(4): 318–326.

[Edited for readability - rtc]

1. "Just as the Liar paradox (or, if you want to be fancy, Tarski's theorem) shows that the T-sentences governing the truth predicate can't be true, this shows that the rules for the logical validity predicate can be true, but can't be logically valid."

Perhaps we could generalize to:

"The rules for the X-predicate cannot be x." (Where X is a noun and x the corresponding adjective)

Although perhaps surprising, I think it should follow quite straightforwardly from a generalization of paradox-templates relying on diagonalization and some form of self-reference (in the validity cases here, the Gödel code of the sentence P is mentioned in P itself).

2. Catarina: "Although perhaps surprising, I think it should follow quite straightforwardly from a generalization of paradox-templates relying on diagonalization and some form of self-reference"

Yes, Priest thinks something like this too - I tend to agree. Also, there is Tarski's view that our pre-theoretic concept of "truth" is inconsistent (I agree); and our pre-theoretic concept of "collection" is inconsistent, since we naively think of each predicate determining a collection.

We can either follow the Priest route - acquiesce in the inconsistency (make the logic non-classical when an inconsistent concept is in play) - or follow Tarski, Kripke et al., and restore consistency by introducing refined replacement concepts, perhaps defined explicitly or governed by restricted axioms/rules for which we have a consistency assurance (and we get formal undefinability results).

3. I tend to agree that a lot of the so-called 'pre-theoretical' concepts we may be interested in might be a mixture of incoherent features. But it may also be a problem with the excessive expressive power of the language, which may be 'abused' in some circumstances (I recall someone developing this point more extensively, but I no longer remember who). The language is so expressive that we are able to express what cannot be (e.g. constructing Gödel sentences by diagonalization). By the latter I don't just mean the usual Tarskian approach of banning whatever it is that is causing trouble (paradoxes), but there might be independent arguments on why these are cases of 'excessive' expressivity. In this sense, there may be a rationale for both approaches after all: embracing incoherence and sanitizing the language a bit.

4. I think my own view on the Liar paradox is in line with the sort of view Catarina sketches. In particular, I argue that these paradoxes (like the set-theoretic ones) are the result of illicitly attempting to quantify over an indefinitely extensible collection of things (truth values or sentences in this case). Thus, my own approach would be this: If you think we need a logical validity predicate that treats its own rules as valid, then the rules for this predicate will need to be modified so that, at any point, the predicate only applies to some definite sub-collection of the indefinitely extensible collection of all sentences. In actuality, though, I think the way to go in this particular case is just to deny that the rules for logical validity are logically valid, as I noted earlier.

You can read about my own view on these things here:

“Embracing Revenge: On the Indefinite Extensibility of Language”, in Revenge of the Liar, JC Beall (ed.), Oxford: Oxford University Press: 31–52.

“What is a Truth Value, and How Many Are There?”, Studia Logica 92 (special issue on truth values): 183–201.

5. Hi Roy, suppose you call whatever theory is generating this V.
V is assumed to contain Q and is assumed to satisfy the principles (VS1) and (VS2). Then write a sequent-style derivation with all the dependencies written in:

0. V |- P <-> Val(conj(P,Q), bot)
1. P & Q |- P & Q
2. P & Q |- Q
3. P & Q |- P
4. V, P & Q |- Val(conj(P, Q), bot)
5. V, P & Q |- Val("P&Q", bot)
6. V, P & Q |- bot
7. V |- Val("P&Q", bot)
8. V |- Q
9. V |- Val(conj(P, Q), bot)
10. V |- P
11. V |- P & Q
12. V |- bot

So, we conclude that V is inconsistent. But I'm still not sure how this differs from Beall/Murzi's original version.
The crucial point is at line 7, where the introduction rule is applied. We have a derivation of bot from P&Q in V (which contains arithmetic). We infer that V itself proves Val("P&Q", bot).
But this assumes that reasoning inside V from A to B permits us to conclude Val("A", "B") and the objection is that from A alone one cannot infer B purely in logic. One must use some assumption about Val (which is what one uses to get 6 from 1 and 5).

6. No, this misses the crucial trick. V doesn't include arithmetic - only VS1 and VS2! Here is a slightly different version of the derivation that I hope makes this clear:

1. P & Q |- P & Q
2. P & Q |- P
3. P & Q |- Q
4. P & Q |- Val(conj(P, Q), bot)

[Note: This is the change. 4 is logical if 3 is, since arithmetic - Q - is one of the explicit premises!]

5. P & Q |- Val("P & Q", bot)

[Similarly, 5 follows logically since arithmetic is an explicit premise]

6. VS2, P & Q |- bot
7. VS1, VS2 |- Val("P & Q", bot)
8. VS1, VS2, Q |- Val(conj(P, Q), bot)
9. VS1, VS2, Q |- P

[Again - you're surely getting the idea - all of these are logical, since the relevant arithmetic and diagonalization facts follow logically from Q]

10. VS1, VS2, Q |- P & Q
11. VS1, VS2, Q |- bot

Note that all entailments are purely logical (on the assumption that VS1 and VS2, but not arithmetic, are logical) and do not depend on arithmetic at all (except in the obvious way when the arithmetic - Q - is explicitly mentioned as (or as part of) a premise or conclusion).

Thus, if the two rules VS1 and VS2 are logical, then this provides us with a logical proof that arithmetic is inconsistent.

Am I missing something?

7. Great! Yes, I understand it better now. The outcome is: Q + V-Out + V-Intro is inconsistent. So, Q is part of the inconsistent theory.
The theory called V in the earlier post was PA + V-Out + V-Intro, and that's inconsistent too, of course. I used PA because PA already proves the relevant fixed point. But if we take the validity principles as primitives, we can reduce the assumptions needed to the single fixed-point axiom:

(CFP) C <-> Val("C", bot)

(which is a theorem of Q of course). Then the theory CFP + V-Out + V-Intro is inconsistent, following something like JC and Julien's derivation.

1. CFP |- C <-> Val("C", bot)
2. CFP + V-Out |- Val("C", bot) -> (C -> bot)
3. CFP + V-Out |- C -> (C -> bot)
4. CFP + V-Out |- C -> bot
5. CFP + V-Out, C |- bot
6. CFP + V-Out + V-Intro |- Val("C", bot)
7. CFP + V-Out + V-Intro |- C
8. CFP + V-Out + V-Intro |- bot

8. Roy, maybe what I've written is not relevant to what you have in mind. Your application of the introduction rule VS1 is:

6. VS2, P & Q |- bot
7. VS1, VS2 |- Val("P & Q", bot)

So, P&Q gets "absorbed"; so, the main idea is that we have no arithmetic assumptions left in 7?

9. "So, P&Q gets "absorbed"; so, the main idea is that we have no arithmetic assumptions left in 7?"

Exactly. VS1 plus VS2 (plus the assumption that these two rules, but not arithmetic, are logical and thus can be mobilized in subderivations leading to applications of VS2) allow us to prove that arithmetic is inconsistent!

10. But I'm still a bit unclear about why this is so different from what we knew before: PA + validity rules is inconsistent. (In particular, the strong introduction rule, V-Intro. The restricted introduction rule - the right one, in my view - is consistent.)

So, there's something extra here going on which I'm missing. It must be "the assumption that these two rules, but not arithmetic, are logical". This seems to be the central issue, but I'm confused about what is, and what isn't, being assumed. Is someone assuming that the validity principles are logical?

I take a statement to be logically true if it's true in all interpretations of non-logical primitives and something counts as logical when it's invariant under permutations. So, something like Val("A", "B") -> (A -> B) is not a logical truth.

Otherwise, all we need for an inconsistency is the fixed-point biconditional for C, which says "I am inconsistent":

(FP) C <-> Val("C", bot)

Then use a version of JC and Julien's derivation (making clear when we use the various rules):

1. (FP) |- C <-> Val("C", bot)
2. (FP), V-Out |- Val("C", bot) -> (C -> bot)
3. (FP), V-Out |- C -> (C -> bot)
4. (FP), V-Out, C |- bot
5. (FP), V-Out, V-Intro |- Val("C", bot)
6. (FP), V-Out, V-Intro |- C
7. (FP), V-Out, V-Intro |- bot

So the strong validity rules contradict just the self-referential biconditional (FP). One doesn't even need arithmetic - although normally we encode syntax in arithmetic. In the restricted case, where we use the weaker introduction rule and C means "I am inconsistent", C comes out as a false but consistent sentence, whose consistency is provable in PA.

11. The point is that the move from 1–4 to 5 in your proof is not valid in the system used in my proof. In your system, you need it to be the case that arithmetic (FP, in particular) can be used within derivations that serve as support for claims about validity. In short, if you can prove it using V-Intro, V-Out, and arithmetic, then it is valid. In my proof, we don't assume this. In order to apply V-Intro, we have to be able to prove that the premise entails the conclusion using only V-Out and V-Intro.

To put it in the terms of your first paragraph:

What JC and Julien give us is: If arithmetic, V-Intro, and V-Out are valid, then arithmetic is inconsistent.

What my proof gives us: If V-intro and V-Out are valid, then arithmetic is inconsistent.

So yeah, the interesting issue is whether we can take the validity principles V-Intro and V-Out to themselves be valid (i.e. logical). My proof shows we can't (JC and Julien's doesn't, since in their proof we can place the blame on the assumption that arithmetic is valid).