Preferential logics, supraclassicality, and human reasoning

(Cross-posted at NewAPPS)

Some time ago, I wrote a blog post defending the idea that a particular family of non-monotonic logics, called preferential logics, offered the resources to explain a number of experimentally established empirical findings about human reasoning. (To be clear: I am here adopting a purely descriptive perspective and leaving thorny normative questions aside. Naturally, formal models of rationality also typically include normative claims about human cognition.)

In particular, I claimed that preferential logics could explain what is known as the modus ponens-modus tollens asymmetry, i.e. the fact that in experiments, participants will readily reason following the modus ponens principle, but tend to ‘fail’ quite miserably with modus tollens reasoning – even though the two are equivalent according to classical as well as many non-classical logics. I also defended the claim (e.g. at a number of talks, including one at the Munich Center for Mathematical Philosophy which is immortalized in video here and here) that preferential logics could be applied to another well-known, robust psychological phenomenon, namely belief bias. Belief bias is the tendency of human reasoners to let the believability of a conclusion, rather than the validity of the argument as such, guide both their evaluation and their production of arguments.

Well, I am now officially taking most of it back (and mostly thanks to working on these issues with my student Herman Veluwenkamp).

Already at the Q&A of my talk at the MCMP, it became obvious that preferential logics would not work, at least not in a straightforward way, to explain the modus ponens-modus tollens asymmetry (in other words: Hannes Leitgeb tore this claim to pieces at the Q&A, which luckily for me is not included in the video!). As it turns out, it is not even obvious how to conceptualize modus ponens and modus tollens in preferential logics, but in any case a big red flag is the fact that preferential logics are supraclassical: they validate all inferences validated by classical logic, and a few more (that is, there are arguments that are valid according to preferential logics but not according to classical logic, but never the other way round). And so, since classical logic sanctions modus tollens, preferential logics will sanction at least something that looks very much like modus tollens. (Contraposition, however, still fails.)
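
To see concretely why supraclassicality bites, here is a quick sanity check of my own (an illustration, not anything from the preferential logics literature): in classical propositional logic, modus ponens and modus tollens hold under exactly the same truth-table semantics, so any logic that sanctions everything classical logic sanctions inherits the modus tollens pattern for the material conditional.

```python
# A minimal illustration (mine, not from the post): both modus ponens
# and modus tollens survive a full truth-table check, so any
# supraclassical logic inherits (something like) modus tollens.
from itertools import product

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

def classically_valid(premises, conclusion):
    """Valid iff no valuation makes all premises true and the conclusion false."""
    return all(conclusion(p, q)
               for p, q in product([False, True], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: p -> q, p, therefore q
print(classically_valid([lambda p, q: implies(p, q), lambda p, q: p],
                        lambda p, q: q))        # True

# Modus tollens: p -> q, not-q, therefore not-p
print(classically_valid([lambda p, q: implies(p, q), lambda p, q: not q],
                        lambda p, q: not p))    # True
```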

In fact, I later discovered that this is only the tip of the iceberg: the supraclassicality of preferential logics (and other non-monotonic systems) becomes a real obstacle when it comes to explaining a very large and significant portion of experimental results on human reasoning. In effect, we can distinguish two main tendencies in these results:
  • Overgeneration: participants endorse or produce arguments that are not valid according to classical logic.
  • Undergeneration: participants fail to endorse or produce arguments that are valid according to classical logic.

For example, participants tend to endorse arguments that are not valid according to classical logic, but which have a highly believable conclusion (overgeneration). But they also tend to reject arguments that are valid according to classical logic, but which have a highly unbelievable conclusion (undergeneration). (Another example of undergeneration would be the tendency to ‘fail’ modus tollens-like arguments.) And yet, overgeneration and undergeneration related to (un)believability of the conclusion are arguably two phenomena stemming from the same source, so to speak: our tendency towards what I call ‘doxastic conservativeness’, or less pedantically, our aversion to changing our minds and revising our beliefs.
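
To make the two categories vivid, here is a brute-force countermodel search, a sketch of my own in Python; the flower/rose syllogisms are stock examples of the kind used in belief bias experiments, and the predicate letters are placeholders.

```python
# A sketch of my own: classify one argument of each kind by searching
# for classical countermodels over small finite domains.
from itertools import product

PREDS = ("F", "W", "R")   # e.g. flowers, needs-water, roses (placeholders)

def models(domain_size):
    """Yield every way of assigning the predicates over a finite domain."""
    cells = list(product([False, True], repeat=len(PREDS)))
    for combo in product(cells, repeat=domain_size):
        yield [dict(zip(PREDS, row)) for row in combo]

def all_are(p, q, model):
    """'All P are Q': every element satisfying p also satisfies q."""
    return all(e[q] for e in model if e[p])

def syllogism_valid(premises, conclusion, max_domain=3):
    """Bounded countermodel search; for monadic forms this simple,
    tiny domains already suffice to expose any countermodel."""
    for n in range(1, max_domain + 1):
        for m in models(n):
            if all(prem(m) for prem in premises) and not conclusion(m):
                return False
    return True

# 'All flowers need water; roses need water; so roses are flowers':
# believable conclusion, but classically INVALID (overgeneration bait).
print(syllogism_valid(
    [lambda m: all_are("F", "W", m), lambda m: all_are("R", "W", m)],
    lambda m: all_are("R", "F", m)))      # -> False

# 'All flowers need water; roses are flowers; so roses need water':
# the valid Barbara form, which participants may still reject when the
# conclusion is dressed up as unbelievable (undergeneration bait).
print(syllogism_valid(
    [lambda m: all_are("F", "W", m), lambda m: all_are("R", "F", m)],
    lambda m: all_are("R", "W", m)))      # -> True
```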

Now, if we want to explain both undergeneration and overgeneration within one and the same formal system, we seem to have a real problem with the logics available on the market. Logics that are strictly subclassical, i.e. which do not sanction some classically valid arguments but also do not sanction anything classically invalid (such as intuitionistic or relevant logics), will be unable to account for overgeneration. Logics that are strictly supraclassical, i.e. which sanction everything that classical logic sanctions and more (such as preferential logics), will be unable to account for undergeneration. (To be fair, preferential logics do work quite well to account for overgeneration.)

So it seems that something quite radically different is required: a system which both undergenerates and overgenerates with respect to classical logic. At this point, my best bet (and here, thanks again to my student Herman) is some specific versions of belief revision theory, more specifically what is known as non-prioritized belief revision. The idea is that incoming new information does not automatically get added to one’s belief set; it may be rejected if it conflicts too much with prior beliefs (whereas the original AGM belief revision theory includes the postulate of Success, i.e. new information is always accepted). This is a powerful insight, and in my opinion precisely what goes on in the cases of belief bias-induced undergeneration: participants do not really accept the false premises as if they were true, which then leads them to reject the counterintuitive conclusions that do follow deductively from the premises offered. (See also this paper of mine, which discusses the cognitive challenges of accepting premises ‘at face value’ for the purposes of reasoning.)
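
For illustration only, here is a toy sketch of the non-prioritized idea, under assumptions entirely of my own (beliefs as literals with numeric entrenchment degrees, plus a fixed screening threshold); it is not any particular system from the literature, but it shows how the Success postulate can fail.

```python
# A toy sketch (my own assumptions, not a system from the literature):
# an input is screened out when it contradicts a belief that is too
# entrenched, so the AGM Success postulate fails.

def negate(lit):
    """Flip a literal: 'p' <-> '~p'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def screened_revise(beliefs, new_lit, weight, threshold=0.8):
    """Revise `beliefs` (a dict literal -> entrenchment) by `new_lit`.

    If the input contradicts a belief entrenched above `threshold`,
    the input is rejected and the prior state kept: Success fails.
    """
    clash = negate(new_lit)
    if beliefs.get(clash, 0.0) > threshold:
        return dict(beliefs)               # input screened out, no revision
    revised = {l: w for l, w in beliefs.items() if l != clash}
    revised[new_lit] = weight
    return revised

prior = {"whales_are_mammals": 0.95, "it_will_rain": 0.5}

# An experimenter's premise 'whales are not mammals' is simply rejected,
# because it clashes with a deeply entrenched prior belief.
print(screened_revise(prior, "~whales_are_mammals", 0.9))

# A clash with a weakly entrenched belief goes through as ordinary revision.
print(screened_revise(prior, "~it_will_rain", 0.9))
```

The design choice that matters is the screening test performed before any revision takes place: it is exactly the point at which ‘doxastic conservativeness’ enters the model.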


In other words, what needs to be conceptualized when discussing human reasoning is not only how reasoners infer conclusions from prior belief, but also how reasoners accept new beliefs and revise (or not!) their prior beliefs. Now, the issue seems to be that logics, as they are typically understood (and not only classical logic), do not have the resources to conceptualize this crucial aspect of reasoning processes – a point already made almost 30 years ago by Gilbert Harman in Change in View. And thus (much as it pains me to say so, being a logically-trained person and all), it does look like we are better off adopting alternative general frameworks to analyze human reasoning and cognition, namely frameworks that are able to problematize what happens when new information arrives. (Belief revision is a possible candidate, as is Bayesian probabilistic theory.)

Comments

  1. Another avenue that you might look into is whether something like Neil Tennant's extensions of Core Logic (which is sort of a combination of intuitionistic and relevant logic) into belief revision might handle both your under- and overgeneration issues. This would stay true to your original method, but possibly skirt the Leitgeb-style objections.

  2. I am not a specialist of the field, but are there some kinds of Bayesian logics? Say you attribute a credence probability to all beliefs and replace modus ponens with something like: p(b)=p(b|a)p(a).
    Modus ponens and modus tollens could yield different results because they are based on different conditional probabilities (for example, I could strongly believe that all swans are white but not be quite sure that all non-white things are non-swans). It's just an idea, I am not certain it works under rational constraints on probabilities (intuitively I would probably rather say that my credence that all non-white things are non-swans is undefined...)

    I was just wondering whether this kind of theory exists. It seems to me that most of the time we are performing probabilistic inferences rather than pure deduction, and that logic is more like a special limiting case where probabilities are all 0 and 1.
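
    A quick numerical sketch of that swan example (the figures are invented purely for illustration):

```python
# Invented numbers, only to test the intuition above: a high credence
# in 'if swan then white' need not carry over to the contraposed
# 'if non-white then non-swan'.
p_swan = 0.5                   # hypothetical base rate P(a)
p_white_given_swan = 0.9       # P(b|a): the 'modus ponens' direction
p_white_given_nonswan = 0.95   # P(b|not-a): fixes the joint distribution

p_nonwhite = ((1 - p_white_given_swan) * p_swan
              + (1 - p_white_given_nonswan) * (1 - p_swan))

# P(not-a | not-b) by Bayes' theorem: the 'modus tollens' direction.
p_nonswan_given_nonwhite = ((1 - p_white_given_nonswan) * (1 - p_swan)
                            / p_nonwhite)

print(p_white_given_swan)                   # 0.9  : strong MP-style inference
print(round(p_nonswan_given_nonwhite, 3))   # 0.333: weak MT-style inference
```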
