In this post, I’m going
to continue my crusade against certain revisionist approaches to logic in
response to paradoxes, which I consider to be insufficiently motivated. I’ve
talked about some of my objections in a few previous posts (here for example), but the main point
is ultimately that any rejection of a rule of inference or structural rule on
the grounds that ‘it leads to paradox’ seems to me to require additional, independent
motivation. Let me hasten to add that I have no particular fondness for
classical logic, and thus my critique is not motivated by general
anti-revisionist inclinations. But I am suspicious of what can be described as
‘fix-up’ solutions, which among other things sidestep the opportunity to
engage in serious reflection on the exact nature of paradoxical phenomena,
and what they tell us about some of our most basic logical concepts.

Currently, a popular
revisionist strategy is to go substructural: rather than ‘ditching’ one of the
usual rules of inference (say, modus ponens), what has to go is one of the
plausible structural principles underlying (often tacitly) classical logic and many
other logical systems. In particular, recently the rule of

*contraction* has been presented as the paradoxical culprit by several people, such as JC Beall, Elia Zardini and Ole Hjortland, who then go on to argue in favor of contraction-free logical systems, or else for restrictions on contraction. Contraction is the rule that says that, if you can infer C from A, A and B, then you can infer C from A and B: the number of copies of a given premise is irrelevant for validity. In natural deduction settings, this means that one can assume a given formula as many times as one wants, and then discharge all the assumptions in one go, with just one application of implication introduction.
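In sequent-calculus notation (a standard textbook presentation, not specific to any of the authors mentioned), contraction is the structural rule:

```latex
% Structural contraction: merging duplicate copies of a premise
% does not affect what follows from the premises.
\[
  \frac{\Gamma, A, A \vdash C}{\Gamma, A \vdash C}\ (\text{Contraction})
\]
```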
It is argued that many (if
not all) derivations of paradoxes, such as the Liar or Curry, make crucial use
of multiple discharge, and thus of the structural rule of contraction. See for
example this derivation of Curry’s paradox (borrowed from a draft paper by Ole Hjortland - sorry for the terrible resolution!):
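For readers who cannot make out the figure, here is a schematic linear reconstruction of the standard Curry derivation (my own rendering, with C defined via the T-schema as T&lt;C&gt; → p; the details of Hjortland's tree-style figure may differ):

```latex
% A schematic natural-deduction run of Curry's paradox, where the
% sentence C says of itself: "if C is true, then p".
% Lines 1 and 3 assume the same formula T<C>; both copies are
% discharged together at line 5 -- the multiple discharge that
% corresponds to structural contraction.
\begin{enumerate}
  \item $T\langle C\rangle$ \hfill assumption (first copy)
  \item $T\langle C\rangle \to p$ \hfill 1, unfolding $C$ via the $T$-schema
  \item $T\langle C\rangle$ \hfill assumption (second copy)
  \item $p$ \hfill 2, 3, $\to$E
  \item $T\langle C\rangle \to p$ \hfill 1--4, $\to$I, discharging \emph{both} copies
  \item $T\langle C\rangle$ \hfill 5, $T$-introduction (line 5 just is $C$)
  \item $p$ \hfill 5, 6, $\to$E
\end{enumerate}
```

Note that the two copies are used differently: the first (line 1) is unfolded into a conditional, while the second (line 3) is simply fed to that conditional as a minor premise.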

The key point is that, on both
the right branch and the left branch, two occurrences of the assumption
[T&lt;C&gt;] are discharged at once by a single application of implication
introduction. To be sure, it is certainly an interesting observation that
contraction seems to be involved in all these paradoxical derivations, but is
this enough to justify rejecting and/or restricting contraction with no further
argumentation? I submit that it is not.

What could count as a more robust
argument against contraction, beyond the observation that it seems to be
involved in typical derivations of logical paradoxes such as the Liar and
Curry? As I see it, the question to be asked is:

*what made us think that contraction was a plausible principle in the first place?* If we can go back to the original reasons to endorse contraction and find fault in them, then we might have independent motivation to reject the principle, other than the fact that it seems to be involved in paradoxical derivations. Either way, this is an opportunity to reflect critically on some of the very building blocks of our conception of logic.
Now, if necessary
truth-preservation is both a necessary and sufficient condition for validity,
it is hard to see what could possibly be wrong with contraction: if A, A, B
=> C is a valid consequence, then so is A, B => C, as the collection of
premises {A, A, B} is verified by exactly the same situations as the collection
of premises {A, B}. Similarly, on the dialogical conception of logic that I’ve
been developing recently, once a premise A has been stated and accepted by the
participants in the dialogical game in question, participants (proponent in
particular) can help themselves to A as many times as they wish: it becomes a commonly
owned and permanently available commodity as it were (it doesn’t ‘wear out’).
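On the truth-preservational picture, this is mechanically checkable: duplicating a premise never changes which valuations satisfy the premise set. Here is a minimal brute-force sketch (the formulas and helper names are my own, purely for illustration):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical entailment by brute-force truth tables: no valuation
    makes all premises true while the conclusion is false."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

A = lambda env: env['a']
B = lambda env: env['b']
C = lambda env: env['a'] and env['b']  # a sample conclusion

atoms = ['a', 'b']
# {A, A, B} is verified by exactly the same valuations as {A, B},
# so contraction cannot turn a valid argument into an invalid one:
assert entails([A, A, B], C, atoms) == entails([A, B], C, atoms)
```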

In my opinion, the most convincing
rejection of contraction so far is that of linear logic, which, as a
logic of resources, is sensitive to the number of copies of a given formula
available: once used, the particular copy of the formula is no longer
available. There may be other plausible reasons why a given logic will be
sensitive to resources in this way, e.g. some logics of information. But these
are all independent motivations to reject or restrict contraction, unrelated to
paradoxes.
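The resource-sensitive reading can be made vivid with a toy example (only an illustration of the idea, not linear logic proper; the vending-machine rule is made up):

```python
from collections import Counter

# One made-up production rule: two coins buy one coffee.
RULES = {('coin', 'coin'): 'coffee'}

def derivable(premises, goal):
    """One-step resource derivability: a rule fires only if the
    multiset of premises contains every copy the rule consumes."""
    have = Counter(premises)
    for consumed, produced in RULES.items():
        if produced == goal and not (Counter(consumed) - have):
            return True
    return False

# With premises read as consumable resources, the number of copies
# matters: coin, coin => coffee succeeds, but contracting the two
# coins into one breaks the derivation.
assert derivable(['coin', 'coin'], 'coffee') is True
assert derivable(['coin'], 'coffee') is False
```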

But specifically with respect to
paradoxes, an interesting avenue to be pursued might be to investigate what
exactly is problematic about the *particular occurrences* of contraction in the paradoxical derivations in question. At the Q&A after Ole’s talk last week at the FLC conference in St. Andrews, Peter Schroeder-Heister made an interesting observation: as it turns out, in every case of multiple discharge in these paradoxical derivations, the two (or more) occurrences of a given formula are heterogeneous, in that one is assumed and then undergoes an application, while the other is simply assumed. This can be observed in the derivation above: in both branches, the first assumption of [T&lt;C&gt;] is then ‘developed’ into C, and subsequently into [T&lt;C&gt;] → p (which is by definition what C means), while the second assumption of [T&lt;C&gt;] does not undergo such a process. (As pointed out by Peter Schroeder-Heister, this kind of ‘heterogeneity’ is not observed in occurrences of multiple discharge in the derivation of e.g. the law of excluded middle.) So perhaps there is something problematic about this particular kind of multiple discharge, which is somewhat ‘incestuous’ in that a formula is assumed and then ‘applied’ to itself (or to its descendants) again.

[UPDATE: Shawn Standefer writes to draw my attention to a 2007 paper by Sue Rogerson in JPL that is highly relevant for the present discussion, 'Natural deduction and Curry's paradox'. Among other things, Rogerson reports (p. 159) that Fitch made a remark very similar to Schroeder-Heister's:

He noticed that there was an unusual feature in the proof of the paradox: namely, that the same formula is used both as a hypothesis for a subordinate proof and then again later as a minor premise in an application of →E.

She then goes on to discuss four different strategies considered by Fitch to deal with the phenomenon. Anyone interested in restrictions to contraction prompted by paradoxes must absolutely read this paper!]

Such observations might lead to
systems which are not contraction-free, strictly speaking, but where
contraction is restricted – hopefully on the basis of some robust principles
rather than an ad-hoc restriction ‘whenever needed’ (i.e. to block paradoxes).
This is indeed what Elia Zardini proposes in his recent RSL paper, namely an
independently motivated restriction of contraction. As I said before, Elia’s
proposal is to my mind the most compelling case for paradox-related contraction
restriction currently on the market.

A final worry for the proponents
of contraction-free systems is the extent of the loss incurred by the rejection
of contraction (a point made by Stephen Read). Without contraction, can we
still prove some of the results that we’ve learned to love and cherish, such as
Cantor’s or Gödel’s? If not, can we live without them? Ultimately, it may well
be that keeping a naïve conception of truth and ending up with an exceedingly
weak logic (if that’s indeed the case) might not be such a good idea after all:
the treatment may be more destructive than the disease itself. In other words:
can we do without contraction, and if yes, should we? At any rate, I for one
need paradox-unrelated reasons to convince me of giving up on contraction.

Hi Catarina,

I was also compelled by Schroeder-Heister's comments this weekend and need to investigate those ideas further. I am however still a bit puzzled by how you carve up this territory. Any system which restricts contraction is *de facto* a contraction-free logic since you only need one counterexample to invalidate the law. So I don't think the question is whether contraction is invalid vs. merely restricted: that is something of a false dichotomy. Rather, what we want is something like an illuminating analysis of the conditions under which contraction fails and exactly why that failure is happening. Does that sound like what you want, or is there more to it?

Colin

Hi Colin,

Yes, that is pretty much what I meant. Perhaps I have an atypical understanding of the term 'contraction-free', but to me this term suggests a total rejection of the rule of contraction across the board, whereas a restrictive approach would at least investigate the conditions under which contraction seems problematic, and possibly identify conditions under which it is not.

But glad to hear you were also compelled by Peter's comment, it's not just crazy anti-revisionist me, hehe...

Just to elaborate on the contrast, I think we should envisage the rejection of contraction as analogous to the intuitionistic rejection of excluded middle. For the intuitionist, some subject matters or domains of discourse are determinate, hence in those contexts it is okay to accept all cases of 'A or not-A' even though it is not okay in general/as a matter of logic. Likewise the advocate of non-contracting logic can admit that in some contexts it is okay to move from ''A,A' entails 'B'' to ''A' entails 'B'' even though it is not okay in general/as a matter of logic.

I'm totally on board with this; revisionary approaches need more than just "it works" to back them up. In the (somewhat) noncontractive area, I'd point to Ross Brady as someone who's taken this challenge seriously. His theory of consequence as "meaning containment" is, I take it, one way to meet the challenge: the content A might contain the content that A contains B, without the content A actually containing the content B itself. Whatever quarrels one might have with this, it's pretty clearly an independent motivation. (In general, relevant logicians working on contraction-free systems have said interesting things to motivate them, and these things are often independent of paradoxes: much of the philosophy in Relevant Logics and their Rivals 1 is in this vein, for example.)

I recall, though, thinking that Elia's motivation in the paper you cite wasn't particularly independent. The idea is that contraction is a certain kind of stability assumption, so only ok when the thing being contracted is indeed stable. But the notion of stability involved looks to me a heck of a lot like "can be contracted on without causing a ruckus". One could develop a further, genuinely independent theory of stability. But without such a theory, I don't think there's much independent motivation at all.

This gets tricky. What counts as independent motivation? Is the meaning containment semantics tractable independent from the formal ends it is meant to serve? None of this is clear to me. It would be nice to have a more concrete idea of what we are looking for in the way of 'independent motivation'.

Well, I thought Elia at least had a metaphysical discussion of what (in)stability is, so in that sense it went beyond the usual story.

And thanks for the pointer to Brady, will check it out!

But related to what Colin says below: it looks like we do need a discussion of what counts as independent motivation indeed. For now, all I'm asking is for a story which goes beyond simply 'blocking the paradoxes'. So I take it that the linear logic rejection of contraction, for example, counts as 'independent motivation' in this sense, because it has a story on what the logic is supposed to do, and on this story contraction is not a plausible principle, paradoxical or not.

But this is probably going to be a matter of personal taste, at least to some extent, whether something counts as independent motivation or not.

Here's a proposal: a motivation for a logical system is independent (in the way that's relevant here) to the extent that it doesn't appeal to paradox.

Tacit or implied dependence is then just as possible as tacit or implied appeal: very. And motivations can be more or less independent. So deciding some cases might be hard. But certainly there will be clear cases; people have used things besides paradox to motivate logical systems, even logical systems that are paradox-friendly.

I take the motivation for Ross's favored logic (in the first few chapters of UL, for example) to be a clear case of "independent", in the above sense. No part of the justification depends on "looking ahead" to paradox. (Of course, it's not like the fact that his logic is paradox-friendly would have *surprised* Ross or anything, but that's not the point.)

Note as well his rejection of the T axioms, or his (post-UL) qualification of distribution down to rule-form from axiom-form. These choices are made precisely because the justification given for the target system doesn't extend to axiom-form T or distribution. The situation for paradoxes, though, is no different with or without these, as the nontriviality proof in UL shows.

Myself, I'm not too fussed if part of the justification for a logical system does appeal to paradox; I'd take the demand for a fully independent motivation to be too strong. There's no reason not to learn about logic from paradoxes. But this doesn't get you very far; there's an embarrassment of riches among paradox-friendly logical systems, and it's only independent motivations that will help us choose among these.

Yes to everything :) (Dave, maybe we should co-author a paper on all this? :D )

My point is precisely that we can learn about logic from the paradoxes, but only provided that we take them seriously rather than going for fix-up solutions. So the question is, what exactly is fishy about contraction *so that* it leads to paradox (besides the fact that it leads to paradox...)?

Another example, something I've worked on myself: Stephen Read's Bradwardine-inspired account of truth as universal quantification. Besides the fact that it blocks the Liar and other paradoxes (as Stephen has shown), there are independent motivations (which I've discussed in some of my papers) for thinking that truth corresponds to universal quantification and falsity to existential quantification, over the things that a sentence says. (That we end up with a theory of truth that is ineffective is an issue, but perhaps that simply is what truth is, an ineffective notion.)

I'll check out Elia's paper again with critical eyes, now that you tell me you were not that convinced...

I'd definitely be interested. Given all the upsides of noncontractive approaches, I think the demand for independent motivation is pretty much the only plausible argument against them. (And I've got to start arguing against them in print one of these days...)

Whoops; almost-simulpost!

Pretty much agreed across the board (although I'll likely disagree on just how far Elia's metaphysical turn goes beyond the usual).

I think I was asking about something a bit more ephemeral. I agree that it is natural to say that applying a logic to solve a paradox requires the system to be viable 'qua logic' independently of what it can do for us 'qua solution'. But this only gives us a negative criterion for what we want in the way of semantic characterization of revisionary logic. I agree that Brady's semantics makes no essential appeal to paradox so it meets this benchmark, but I don't think that necessarily makes it illuminating. (FWIW, I don't have a developed view about the strengths or weaknesses of meaning-containment semantics so I leave that topic aside for the remainder)

I don't know whether we can articulate robust, positive criteria for what we want in the way of semantic characterization, but I'll give it a shot. It is natural to think of semantic characterization of a logic as a (toy?) theory of meaning of the connectives; it is also natural to think that this should provide us with a means of analyzing validity in other theoretical terms (e.g. 'truth-preservation'). The challenge to any revisionary-logic-solution to paradox is to give a semantic characterization of the background logic on which (i) validity is analyzed in theoretical terms independent of the basic proof-theoretic machinery, (ii) on the given characterization there are definite exceptions to the target classical rule, and (iii) these exceptions make no essential appeal to paradox. This is what, e.g. Zach and I hope to do for contraction failure.

That's a higher standard than I'd want to hold you to; what if it happens that the only failures of contraction fail it precisely in virtue of being paradoxical?

I'd think that the following would be enough: 1) justifying, without appeal to paradox, a story about validity that *leaves open* whether contraction (say) always preserves validity, and then 2) giving a plausible theory of paradox that shows why, given the story in 1, paradoxes ensure failures of contraction.

Fair enough, in practice I'd be happy with that as well.

It seems to me difficult to discuss this topic without being clear about what validity means. One can say an argument is valid if its conclusions follow from its premises, or one can say that an argument is valid if the truth of the premises entails the truth of the conclusions. The two do not seem to me to be equivalent (I would assert the former, not the latter.)

For instance IMHO, "From A, infer A or B" is valid in the first sense, not valid in the latter. Specifically, let "xxx" abbreviate "not true and not false." Then A is not true if and only if A is false or A is xxx. Consider:

(A) "C is false"

(B) "C is xxx"

(C) A or B, i.e. "C is false or C is xxx", i.e. "C is not true"

Then C follows from B. But I would not assert that from the truth of B, one can conclude the truth of C. B is true, but C is not true.

I added an update in the post to refer to a paper by Sue Rogerson that is highly relevant for this discussion. Thanks to Shawn Standefer for the pointer!