A well-known phenomenon in the empirical study of human reasoning is the so-called Modus Ponens-Modus Tollens asymmetry. In reasoning experiments, participants almost invariably ‘do well’ with MP (or at least something that looks like MP – see below), but the rate for MT success drops considerably (from almost 100% for MP to around 70% for MT – Schroyens and Schaeken 2003). As a result, any theory purporting to describe human reasoning accurately must account for this asymmetry. Now, given that for classical logic (and other non-classical systems) MP and MT are equally valid, plain vanilla classical logic fails rather miserably in this respect.
As noted by Oaksford and Chater (‘Probability logic and the Modus Ponens-Modus Tollens asymmetry in conditional inference’, in this 2008 book), some theories of human reasoning (mental rules, mental models) explain the asymmetry at what is known as the algorithmic level (a terminology proposed by Marr (1982)) – that is, in terms of the mental processes that (purportedly) implement deductive reasoning in a human mind. So according to these theories, performing MT is harder than performing MP (for a variety of reasons), which is why reasoners, while still trying to reason deductively, have difficulties with MT. Other theorists contend that participants are not in fact trying to reason deductively at all, so the asymmetry is not related to some presumed competence-performance gap. (Marr’s term for the general goal of the processes, rather than the processes themselves, is ‘computational level’ – the terminology is somewhat unnatural, but it has now become standard.) Oaksford and Chater are among those favoring an analysis at the computational level, in their case proposing a Bayesian, probabilistic account of human reasoning as a normative theory that not only explains but also justifies the asymmetry.
In the reasoning literature, most of those who have rejected the so-called ‘deduction paradigm’ have gone probabilistic. One exception is the work of Stenning and van Lambalgen (2008), who take a (qualitative) non-monotonic logic known as closed-world reasoning as their starting point to investigate human reasoning at both the algorithmic and the computational levels. For reasons that are too convoluted to go into here, I am not entirely satisfied with either the probabilistic approach or the closed-world reasoning approach. I like the idea of non-monotonic logics and the qualitative perspective they offer, but Stenning and van Lambalgen’s approach is too ‘syntactic’ for my taste. Instead, I’ve been working with the semantic approach to non-monotonic logics originally introduced by Shoham (1987) and further developed in the classic (Kraus, Lehmann and Magidor 1990) to account for a number of psychological phenomena pertaining to reasoning, such as so-called reasoning biases (see slides of a talk here) and now the MP-MT asymmetry. This group of theories is often referred to as ‘preferential logics’. (It is worth noting that in many, but not all, non-monotonic logics, MP holds but MT does not.) I take the semantic approach of preferential logics to be not only technically useful for studying reasoning phenomena, but also descriptively plausible. (And here is a little plug: Hanti Lin is doing amazing theoretical work with a similar framework.)
Shoham’s semantic approach to non-monotonic logics is beautifully simple: take a standard monotonic logic L and define a strict partial order on the models M of L, which is viewed as defining a preference relation: M1 < M2 means that M2 is preferred over M1. The non-monotonic consequence relation then becomes:
A => B iff all preferred models of A are models of B.
This relation is non-monotonic because A & C may have preferred models that are not preferred models of A alone. So with the addition of C, it may no longer be the case that B holds in all preferred models of A & C, even if B holds in all preferred models of A alone.

What about MP and MT? If the conditional is given a defeasible interpretation corresponding to the defeasible consequence relation defined above, then what we have is not the classical version of MP, which allows for no exceptions (no cases where the antecedent holds but the consequent does not), but rather something that could be described as defeasible MP (a terminology used, for example, by D. Walton and collaborators). Do we obtain a defeasible MT as well?
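Before turning to that question, the non-monotonicity itself can be made concrete with a small computational sketch. This toy construction is my own, not taken from any of the works cited: models are sets of atoms, hard background knowledge (penguins are birds, penguins don’t fly, and, in this toy world, non-penguin birds fly) fixes which models exist, and models with fewer abnormalities (here: penguins) are preferred. Defeasible MP goes through, but strengthening the antecedent defeats the conclusion:

```python
from itertools import chain, combinations

ATOMS = ("bird", "penguin", "flies")

# A model is the set of atoms true in it; keep only models obeying the
# hard background knowledge of the toy world.
def satisfies_kb(m):
    bird, peng, flies = ("bird" in m), ("penguin" in m), ("flies" in m)
    return ((not peng or bird)                     # penguins are birds
            and (not peng or not flies)            # penguins don't fly
            and (not (bird and not peng) or flies))  # other birds fly

MODELS = [frozenset(s)
          for s in chain.from_iterable(
              combinations(ATOMS, r) for r in range(len(ATOMS) + 1))
          if satisfies_kb(frozenset(s))]

# Preference: the fewer abnormalities (penguins), the more preferred.
def abnormality(m):
    return 1 if "penguin" in m else 0

def preferred(prop):
    """The most-preferred (minimal-abnormality) models satisfying prop."""
    ms = [m for m in MODELS if prop(m)]
    best = min(abnormality(m) for m in ms)
    return [m for m in ms if abnormality(m) == best]

def entails(a, b):
    """A => B iff all preferred models of A are models of B."""
    return all(b(m) for m in preferred(a))

bird = lambda m: "bird" in m
flies = lambda m: "flies" in m
bird_and_penguin = lambda m: "bird" in m and "penguin" in m

print(entails(bird, flies))              # True: defeasible MP goes through
print(entails(bird_and_penguin, flies))  # False: the added premise defeats it
```

The preferred models of ‘bird’ exclude the penguin models, so ‘flies’ holds in all of them; adding ‘penguin’ to the antecedent forces the penguin models back in, and the conclusion is lost.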
Now, the first thing to notice is that the preferential consequence relation has a built-in asymmetry: it refers to the preferred models of A, but to all models of B. So this consequence relation does not contrapose: assuming that, for all models M and all propositions P, either P or not-P holds in M, for the relation to be contrapositive it would be required that all preferred models of not-B are also models (preferred or otherwise) of not-A. But there may well be non-preferred models of A which are also (preferred) models of not-B. Thus, the definition is not satisfied for not-B => not-A. By the same token, MT does not hold, and the MP-MT asymmetry is both explained and justified.
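The failure of contraposition can also be checked concretely. The following sketch is again my own toy construction: the strict partial order only compares models that agree on ‘bird’, preferring the one that drops the abnormality ‘penguin’; models disagreeing on ‘bird’ are incomparable. Then bird => flies holds, but not-flies => not-bird fails, because one of the preferred models of not-flies is still a model of bird:

```python
from itertools import chain, combinations

ATOMS = ("bird", "penguin", "flies")

# Same hard background knowledge as before: penguins are birds,
# penguins don't fly, and (in this toy world) non-penguin birds fly.
def satisfies_kb(m):
    bird, peng, flies = ("bird" in m), ("penguin" in m), ("flies" in m)
    return ((not peng or bird)
            and (not peng or not flies)
            and (not (bird and not peng) or flies))

MODELS = [frozenset(s)
          for s in chain.from_iterable(
              combinations(ATOMS, r) for r in range(len(ATOMS) + 1))
          if satisfies_kb(frozenset(s))]

# Strict partial order: m2 is preferred over m1 (m1 < m2) only when the
# two models agree on 'bird' and m2 drops the abnormality 'penguin'.
# Models that disagree on 'bird' are simply incomparable.
def preferred_over(m2, m1):
    return (("bird" in m2) == ("bird" in m1)
            and "penguin" not in m2
            and "penguin" in m1)

def preferred(prop):
    """Models of prop not strictly dominated by another model of prop."""
    ms = [m for m in MODELS if prop(m)]
    return [m for m in ms if not any(preferred_over(m2, m) for m2 in ms)]

def entails(a, b):
    """A => B iff all preferred models of A are models of B."""
    return all(b(m) for m in preferred(a))

bird = lambda m: "bird" in m
flies = lambda m: "flies" in m
not_flies = lambda m: "flies" not in m
not_bird = lambda m: "bird" not in m

print(entails(bird, flies))          # True: bird => flies holds
print(entails(not_flies, not_bird))  # False: contraposition fails
```

The preferred models of not-flies are the empty model and {bird, penguin}, which are incomparable; the latter is a non-preferred model of ‘bird’ that is nonetheless a preferred model of not-flies, which is exactly the situation described above.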
In fact, adopting this framework also suggests that the sky-high success rate with MP in experiments may not indicate that participants are in fact reasoning deductively (indefeasibly). This is because in the case of MP the defeasible and the indefeasible responses coincide; in the case of MT, however, they come apart, suggesting that at least some of the participants (those who do MP but not MT) may have been reasoning defeasibly all along.