A well-known phenomenon in the empirical study of human reasoning is the so-called *Modus Ponens-Modus Tollens* asymmetry. In reasoning experiments, participants almost invariably ‘do well’ with MP (or at least with something that looks like MP – see below), but the success rate for MT drops considerably (from almost 100% for MP to around 70% for MT – Schroyens and Schaeken 2003). As a result, any theory purporting to describe human reasoning accurately must account for this asymmetry. Now, given that in classical logic (and in many non-classical systems) MP and MT are equally valid, plain vanilla classical logic fails rather miserably in this respect.

As noted by Oaksford and Chater (‘Probability logic and the *Modus Ponens-Modus Tollens* asymmetry in conditional inference’, in this 2008 book), some theories of human reasoning (mental rules, mental models) explain the asymmetry at what is known as the algorithmic level (a terminology proposed by Marr (1982)) – that is, in terms of the mental processes that (purportedly) implement deductive reasoning in a human mind. So according to these theories, performing MT is harder than performing MP (for a variety of reasons), which is why reasoners, while still trying to reason deductively, have difficulties with MT. Other theorists maintain that participants are not in fact trying to reason deductively at all, so the asymmetry is not related to some presumed competence-performance gap. (Marr’s term for the general goal of the processes, rather than the processes themselves, is ‘computational level’ – the terminology is somewhat unnatural, but it has now become standard.) Oaksford and Chater are among those favoring an analysis at the computational level, in their case proposing a Bayesian, probabilistic account of human reasoning as a normative theory that not only explains but also *justifies* the asymmetry.

In the reasoning literature, most of those who have rejected the so-called ‘deduction paradigm’ have gone probabilistic. One exception is the work of Stenning and van Lambalgen (2008), who take a (qualitative) non-monotonic logic known as closed-world reasoning as their starting point to investigate human reasoning at both the algorithmic and the computational levels. For reasons that are too convoluted to go into here, I am not entirely satisfied with either the probabilistic approach or the closed-world reasoning approach. I like the idea of non-monotonic logics and the qualitative perspective they offer, but Stenning and van Lambalgen’s approach is too ‘syntactic’ for my taste. Instead, I’ve been working with the semantic approach to non-monotonic logics originally introduced by Shoham (1987) and further developed in the classic (Kraus, Lehmann and Magidor 1990) to account for a number of psychological phenomena pertaining to reasoning, such as so-called reasoning biases (see slides of a talk here) and now the MP-MT asymmetry. This group of theories is often referred to as ‘preferential logics’. (It is worth noting that in many, but not all, non-monotonic logics, MP holds but MT does not.) I take the semantic approach of preferential logics to be not only technically useful for studying reasoning phenomena, but also descriptively plausible. (And here is a little plug: Hanti Lin is doing amazing theoretical work with a similar framework.)

Shoham’s semantic approach to non-monotonic logics is beautifully simple: take a standard monotonic logic L and define a strict partial order on the models M of L, which is viewed as defining a preference relation: M1 < M2 means that M2 is preferred over M1. The non-monotonic consequence relation then becomes:

A => B iff all *preferred* models of A are models of B.
This relation is non-monotonic because A & C may have preferred models that are not preferred models of A alone. So with the addition of C, it may no longer be the case that B holds in all preferred models of A & C, even if B holds in all preferred models of A alone.
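To make the definition concrete, here is a toy sketch in Python. It is a sketch under simplifying assumptions, not part of Shoham’s framework itself: the language is finite, and the strict partial order is induced by a ranking function that counts ‘abnormalities’ (lower rank = more preferred), in the spirit of the familiar Tweety examples. All names and the particular ranking are my own illustrative choices.

```python
# Toy preferential entailment: A => B iff all minimal-rank models of A
# are models of B. Assumes a finite propositional language and a ranking
# function in place of a general strict partial order.
from itertools import product

ATOMS = ('bird', 'penguin', 'flies')

# All 8 truth assignments over the atoms.
WORLDS = [dict(zip(ATOMS, values)) for values in product([True, False], repeat=3)]

def rank(w):
    """Count 'abnormalities' of a world; rank 0 worlds are most preferred."""
    penalties = [
        w['penguin'],                                        # penguins are exceptional
        w['bird'] and not w['penguin'] and not w['flies'],   # normal birds fly
        w['penguin'] and w['flies'],                         # penguins don't fly
        w['penguin'] and not w['bird'],                      # penguins are birds
    ]
    return sum(penalties)

def entails(A, B):
    """A => B iff all *preferred* (minimal-rank) models of A are models of B."""
    models_of_A = [w for w in WORLDS if A(w)]
    best = min(rank(w) for w in models_of_A)
    return all(B(w) for w in models_of_A if rank(w) == best)

bird    = lambda w: w['bird']
penguin = lambda w: w['penguin']
flies   = lambda w: w['flies']

print(entails(bird, flies))                              # True
print(entails(lambda w: bird(w) and penguin(w), flies))  # False: non-monotonicity
print(entails(lambda w: bird(w) and penguin(w),
              lambda w: not flies(w)))                   # True
```

The second call shows the non-monotonicity: strengthening the antecedent from ‘bird’ to ‘bird & penguin’ shifts which models are preferred, and the conclusion ‘flies’ is lost.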

What about MP and MT? If the conditional is given a defeasible interpretation corresponding to the defeasible consequence relation defined above, then what we have is not the classical version of MP, which allows for no exceptions (no cases where the antecedent holds but the consequent does not), but rather something that could be described as defeasible MP (a terminology used, for example, by D. Walton and collaborators). Do we obtain a defeasible MT as well?

Now, the first thing to notice is that the preferential consequence relation has a built-in asymmetry: it refers to the *preferred* models of A, but to *all* models of B. So this consequence relation does not contrapose: assuming that, for all models M and all propositions P, either P or not-P holds in M, for the relation to contrapose it would be required that all preferred models of not-B are also models (preferred or otherwise) of not-A. But there may well be non-preferred models of A which are also (preferred) models of not-B. Thus, the definition is not satisfied for not-B => not-A. By the same token, MT does not hold, and the MP-MT asymmetry is both explained and justified.
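A minimal countermodel makes the failure of contraposition vivid. The following sketch (again with an illustrative ranking of my own, standing in for the strict partial order) arranges things exactly as described above: the only preferred A-world satisfies B, while a non-preferred A-world sits among the preferred not-B worlds.

```python
# Countermodel to contraposition for preferential consequence, over a
# two-atom language with worlds written as (a, b) truth-value pairs.
WORLDS = [(a, b) for a in (True, False) for b in (True, False)]

def rank(w):
    # (a, b) is the uniquely preferred world; the other three are tied.
    return 0 if w == (True, True) else 1

def entails(A, B):
    """A => B iff every minimal-rank model of A is a model of B."""
    models = [w for w in WORLDS if A(w)]
    best = min(rank(w) for w in models)
    return all(B(w) for w in models if rank(w) == best)

A     = lambda w: w[0]
B     = lambda w: w[1]
not_A = lambda w: not w[0]
not_B = lambda w: not w[1]

print(entails(A, B))          # True: the only preferred A-world is (a, b)
print(entails(not_B, not_A))  # False: (a, not-b) is a preferred not-B world
```

Here A => B holds, yet not-B => not-A fails, because the non-preferred A-world (a, not-b) is among the preferred not-B worlds. This is precisely the situation in which defeasible MP succeeds while defeasible MT does not.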

In fact, adopting this framework also suggests that the sky-high rate of success with MP in experiments may not be an indication that participants are in fact reasoning deductively (indefeasibly). This is because in the case of MP the defeasible and the indefeasible responses coincide; in the case of MT, however, the responses come apart, suggesting that at least some of the participants (those who do MP but not MT) may be reasoning defeasibly all along.

Hello,

I have a few questions. What is the difference between Shoham's semantics and conditional logics? With possible worlds, we obtain almost the same truth-conditions for the conditional:

A => B iff all closest worlds where A is true are worlds where B is true

Also, in Adams's semantics, MP holds but contraposition does not. What are the advantages of Shoham's theory compared to the conditional-logic approaches based on possible worlds or on probabilities?

There are a lot of similarities between the two frameworks, and for the reason you point out: the notions of 'closest worlds' and 'preferred/minimal models' are indeed very similar. But the general purposes of each of the frameworks are different: non-monotonic logics aim at capturing defeasible patterns of reasoning, while semantic theories of conditionals are, well, semantic theories of conditionals! But some people also move a bit back and forth between the frameworks, e.g. the work of my former colleague Frank Veltman.

Actually, see section 5.7 of the SEP entry on defeasible reasoning: "Lehmann and Magidor (Lehmann and Magidor 1992) noticed an interesting coincidence: the metalogical conditions for preferential consequence relations correspond exactly to the axioms for a logic of conditionals developed by Ernest W. Adams (Adams 1975)."

So there you go :)

Thanks for the reference. This means that, from a formal point of view, the defeasible semantics and Adams's probabilistic semantics yield the same results.

I'm trying to figure out whether a psychological experiment could discriminate between the two theories. In any case, probabilistic predictions are more easily compared with statistical data from psychological experiments. Perhaps this explains their success in the psychology of reasoning.

(Kraus, Lehmann and Magidor 1990) also has a whole section on comparing the two frameworks.

I'm not sure whether it makes sense to try to discriminate between the two theories empirically, precisely because they are meant to be theories about different things. The remarkable thing is rather that they ended up converging.

A more general question is how probabilistic frameworks relate to non-probabilistic frameworks, when both aim to explain the same phenomena. So for example, Oaksford and Chater's application of Bayesian probability to psychological phenomena would be a true competitor to my claim that preferential logic offers a good description of reasoning phenomena. But in that case, my guess is that there would be non-trivial disagreement on the predictions made by each theory.