Thursday, 30 June 2011
Should math be taught in schools?
Wednesday, 29 June 2011
Report on Extended Cognition Workshop
In the 'report on conferences' spirit, I've just written a report on the Extended Cognition Workshop which I organized and which took place in
Tuesday, 28 June 2011
Report on conferences -- Guest post by Shawn Standefer
There were several presentations motivating substructural logics in response to the paradoxes. Julien Murzi presented some criticisms of Field's recent work based on its inability to contain an adequate validity predicate. This was, I think, based on joint work with JC Beall. Murzi used this to motivate rejecting the structural rule of contraction in addition to the rule of contraction for the conditional. Elia Zardini motivated a contraction-free logic based on paradoxes that arise from adding predicates for naive logical properties, such as consistency and inconsistency, that obey simple inference rules. Zardini has obtained some interesting metatheoretic results concerning the validity relation for his logic using a non-classical metalanguage. One point that was pushed in discussion was that there are difficulties in making philosophical and semantic sense of the failure of structural contraction.
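To give the flavour of the motivation, here is a sketch of the familiar validity Curry reasoning (my own reconstruction, not necessarily Murzi's exact presentation). Suppose the validity predicate $Val$ obeys the rules (VP), from $A \vdash B$ infer $\vdash Val(\ulcorner A \urcorner, \ulcorner B \urcorner)$, and (VD), $A, Val(\ulcorner A \urcorner, \ulcorner B \urcorner) \vdash B$, and let $\pi$ be a self-referential sentence equivalent to $Val(\ulcorner \pi \urcorner, \ulcorner \bot \urcorner)$. Then:
1. $\pi, Val(\ulcorner \pi \urcorner, \ulcorner \bot \urcorner) \vdash \bot$ (VD)
2. $\pi, \pi \vdash \bot$ (1, replacing $Val(\ulcorner \pi \urcorner, \ulcorner \bot \urcorner)$ by the equivalent $\pi$)
3. $\pi \vdash \bot$ (2, structural contraction)
4. $\vdash \pi$ (3, VP, and the equivalence)
5. $\vdash \bot$ (3, 4, cut)
No conditional appears in the derivation, so restricting contraction for the conditional is no help here; the natural culprit is the structural rule of contraction itself.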
Joint work by Pablo Cobreros, Paul Egre, David Ripley, and Robert van Rooij was presented at both conferences. They used work on vagueness to motivate a non-transitive logic that can be given a surprisingly clean, surprisingly classical proof theory. Their
There was more good work on proof theory presented in
There was a talk at each conference responding to Leitgeb's work on the paradoxes. In
Both conferences featured discussion of Stephen Read's work on Bradwardine's theory of truth. In
Michael Glanzberg gave interesting talks at both conferences on two different approaches to truth: the complex and the simple. The former sees truth as substantive and embraces hierarchies of truth but no substantive notions of determinateness. The latter takes a deflationary view of truth and embraces notions of determinateness but rejects hierarchies of truth. Glanzberg presented some technical results concerning iteration and reflection, including some analysis of the complexity involved in long iterations found in different approaches.
Anil Gupta presented talks at both conferences on the role of the T-sentences in theories of truth, focusing on what sort of conditional should be used in them. Gupta initially motivated the analysis of the T-sentences by framing a logical problem, the sorting of good arguments from bad. He used this to motivate a new conditional for the revision theory and showed how it can be used to address some of the initial problems he raised.
In
In
In
Riccardo Bruni gave a great talk on a sequent calculus presentation of the calculus for finite definitions from the Revision Theory book. He introduced a special conditional, and he sketched how to prove an elimination theorem for the calculus with the definitional rules, classical logic on indexed formulas, and his new conditional. Stefan Wintein argued that Philip Kremer's formalization of the Gupta-Belnap criterion of vicious circularity did not adequately respect the intuitions behind the original informal idea. Wintein then proposed a new version that focused on T-sentences. Wintein's version was, I believe, slightly weaker than Kremer's, as the two come apart in the case of Wintein's generalized strong Kleene schemes.
Graham Leach-Krouse gave an excellent talk on the surprise examination paradox. Leach-Krouse showed how to formalize the reasoning involved in the paradox and then presented a generalization of it. The generalized version leads to Solovay's theorem connecting provability logic and its arithmetic interpretation. Leach-Krouse used this fact to argue against Ramsey's famous distinction between the logical and the linguistic paradoxes. Great stuff.
There were a few other talks at the conferences, but I either cannot find my notes for them or do not have good notes for them. In
In
I will close by highlighting two things that came up in discussion across different talks. The first is the complexity of theories of truth. This was mentioned by several people. There are some nice results about the complexity of different approaches, even of straightforward disquotational theories. (For instance, if I am remembering the results correctly, the set of grounded truths in Kripke's least fixed point is $\Pi^1_1$-complete, while the set of stable truths in revision theory is $\Pi^1_2$; results along these lines are due to Burgess.) Some people took these results as strong reasons against adopting different theories. What should we make of these arguments?
The second thing has to do with the philosophy of logic. There seemed to be some important questions about how we are supposed to think of logic in the context of truth. There was a divide over whether classical logic should be privileged, what this means, and if it should be privileged, why. This may, however, just be a recasting of old debates about the status of non-classical logic. The wealth of material in the area of theories of truth seems promising as a way of shedding new light on those debates.
Saturday, 25 June 2011
Roy's Fortnightly Puzzle: Volume 5
Friday, 24 June 2011
A song of love and logic
You’d both be and not be mine"
(listen)
"The 21st Century Monads are an international musical collaboration whose songs address fundamental issues in philosophy, including specialized topics in contemporary analytic philosophy and the history of philosophy. The musical genres range from dance to folk. The songs are unique, original songs, not cheesy parodies."
Monday, 20 June 2011
The Law of Permutation and Paracompleteness
(A12) $A \rightarrow ((A \rightarrow B) \rightarrow B)$
From A12, together with another law that Field's algebraic semantics validates, conjunctive syllogism (CS),
$ ((A \rightarrow B) \land (B \rightarrow C)) \rightarrow (A \rightarrow C)$
Field shows that we can derive the law of contraction (W):
$(A \rightarrow (A \rightarrow B)) \rightarrow (A \rightarrow B)$
This leads to disaster, since most systems, including Field's, cannot have (W) on pain of the Curry Paradox. Field's diagnosis is that we have no choice but to reject (A12), and thus the law of permutation (C) with it. The cost is high, but Field suggests that it would be worse to give up the sensible-looking (CS). After all, (CS) simply looks like a law recording the transitivity of the conditional, a property we definitely want to preserve.
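To spell out the disaster, here is the standard Curry derivation, as a sketch assuming only modus ponens, an instance of (W), and a self-referential sentence $C$ equivalent to $C \rightarrow \bot$ (obtained, say, by diagonalization with a truth predicate):
1. $C \rightarrow (C \rightarrow \bot)$ (from the equivalence, left to right)
2. $(C \rightarrow (C \rightarrow \bot)) \rightarrow (C \rightarrow \bot)$ (instance of (W))
3. $C \rightarrow \bot$ (1, 2, modus ponens)
4. $C$ (3 and the equivalence, right to left)
5. $\bot$ (3, 4, modus ponens)
Since $\bot$ can be replaced by an arbitrary sentence, a system with (W) and a naive truth predicate proves everything.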
Nevertheless, I think that Field's line of reasoning is not entirely satisfactory. If we instead give up (CS), we might be able to maintain both (C) and its equivalence with A12. As far as intuitions go, I suppose that (C) is a pretty natural-looking fellow too. Indeed, I'm not sure what would constitute a tie-breaker between (CS) and (C). But that aside, there is another consideration in favour of giving up (CS) rather than (C). (C) is the conditional law characteristic of structural contraction, i.e.
Update: Shawn Standefer made some very valuable comments. First, A10 and A11 above only hold in rule form, not as axioms. Second, although I wasn't aware of this, it looks like (CS) isn't valid in the semantics Field uses in the book Saving Truth From Paradox, although it does hold in the semantics of his earlier JPL paper.
PhD position in Konstanz and how to get women to apply for jobs
Over the weekend, Franz Huber (
The Formal Epistemology Research Group invites applications for a PhD position in Theoretical Philosophy for an initial period of two years, starting October 1, 2011, or some date agreed upon. The position is subject to the positive evaluation of an interim report. Applications should include at least two letters of reference as well as a description of the dissertation project and/or a writing sample. They should be sent to: formal.epistemology@uni-konstanz.de by July 31, 2011.
Here is the link to the Formal Epistemology Research Group in
Besides advertising the position here as one for which women are particularly encouraged to apply, I’d like to offer some considerations on ‘gender differences’ concerning matters such as applying for jobs. I know the readership of this blog is overwhelmingly male (it’s just the demographics of the area), and it is pretty clear to me that men in general have no idea of what goes on when a woman sees a job ad which might be suitable for her. A woman’s first reaction is often to think that she is not suitable, that she does not fit the profile; in short, that she is not ‘good enough’. This phenomenon has been widely documented by studies in social psychology, and is certainly not restricted to PhD positions or academic jobs more generally. Typically, if there is a list of five items in the job description and a woman satisfies 4.8 of them, she will think she has no business applying; by contrast, if a man satisfies 3 of them, he already feels he should apply. There are all kinds of reasons why this is so, none of which entails gender essentialism; it is simply a consequence of how women’s potential is perceived throughout their lives and of the fact that they have internalized a general feeling of inadequacy.
What I mean to say by all this is that if you, a fairly senior male researcher, know of a suitable female candidate for this position, or any other position, she will typically need much more support and encouragement to apply than a male candidate would. You will need to tell her in so many words that yes, she’s very talented, that she has what it takes for the job, and that she should definitely apply. In fact, this also holds for women who have already taken up positions in environments that are predominantly (or overwhelmingly) male in composition; such women are much more likely to feel inadequate and ‘not good enough’ from the mere fact that they are among the very few women around. (This is essentially related to the kinds of unconscious mechanisms usually referred to as ‘implicit biases’, also widely documented in the psychology literature.)
The bottom line is: as long as a given area is marked by serious gender imbalance, it is much harder to work in that area if you belong to the under-represented gender (and similar considerations hold for any other minority as well), simply because the feeling of inadequacy is constantly looming large. So it is only fair that people in the under-represented group (women, in our case here) should receive additional support and encouragement, as they are swimming against the current non-stop. (Of course, there are a few women who somehow manage to neutralize the swimming-against-the-current effect, but this doesn't invalidate the point that the effect is there all right.) It would be very nice if senior people, both male and female, could keep this in mind when coaching their students and younger colleagues.
Thursday, 16 June 2011
Prizes and lectures in London
A while back, the LSE announced the result of another London prize, the 2010 Lakatos Award in Philosophy of Science. The Prize has gone to Peter Godfrey-Smith (Harvard) for his book Darwinian Populations and Natural Selection (Oxford University Press, 2009), and Godfrey-Smith also gave his public lecture in London a couple of weeks ago. An mp3 recording can be found at this page.
Wednesday, 15 June 2011
Mercier and Sperber on the origins of reasoning
Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.

I have a lot of sympathy for this approach, but ultimately I think it is flawed, as I argue in my post at NewAPPS.
Monday, 13 June 2011
Roy's Fortnightly Puzzle: Volume 4
Thursday, 9 June 2011
Latest news on the 'inconsistency of PA' affair
UPDATE: I'm adding a few small changes to the original post, which were suggested by Steve Awodey.
(Cross-posted at NewAPPS.)
As previously reported (here and here), over the last few weeks there has been a heated debate within the foundations of mathematics community over what appeared to be a controversial statement by Fields medalist V. Voevodsky: that the consistency of PA is an open problem. But what seemed at first sight a rather astonishing statement has become more transparent over the last couple of days, in particular thanks to posts by Bill Tait and Steve Awodey over at the FOM list. (Moreover, I’ve had the pleasure of discussing the matter with Steve Awodey in person here in
First, it might be useful to clarify some of the basic points of the program Voevodsky and others are engaged in. A few years back, Awodey established a neat correspondence between homotopy theory (Voevodsky’s main topic of research) and Martin-Löf’s constructive type theory. The result then led to the project of providing foundations for mathematics within the newly developed framework of Homotopy Type Theory and Univalent Foundations. One of the distinctive characteristics of the framework is that it is based on the notion of a continuum (homotopy theory is a branch of topology), while the more familiar set-theoretical framework is based on the notion of discrete elements (sets). Moreover, constructive type theory is, well, constructive, so it is much more straightforward to maintain ‘epistemic control’ over proofs. Around the time Awodey’s correspondence results became known, Voevodsky had independently become interested in related ideas, and they are now actively collaborating on the project (Martin-Löf is also involved, as is Thierry Coquand). Crucially, given its very distinctive starting point (continuity vs. discreteness) and its constructive nature, the framework could provide a truly revolutionary approach to the foundations of mathematics.
Now, within the bigger picture of things, the consistency of PA is actually a tangential, secondary issue. Voevodsky’s seemingly polemical statement concerning the potential inconsistency of PA in fact seems to amount to the following: all the currently available proofs of the consistency of PA rely, on the meta-level, on the very claim they prove, namely the consistency of PA. (S. Awodey adds that it is a consequence of Gödel's results that all proofs of the consistency of PA must use methods that are stronger than PA in the meta-language.) So if PA were inconsistent, these proofs would still go through; in other words, there is a sense in which such proofs are circular, in that they presuppose the very fact that they seek to prove. I raised a similar point before in comments here: if PA were unsound (which it would be if it were inconsistent), it might be able to prove its own consistency even while actually being inconsistent (which also means that Hilbert’s original goal may have had a suspicious circular component all along). Now, we know by Gödel that PA cannot prove its own consistency, but the available proofs of the consistency of PA all seem to presuppose the consistency of PA (on the meta-level), so we end up in roughly the same situation of epistemic uncertainty. Bill Tait sums up this general point in his latest message to FOM:
I understand him [Voevodsky] to be saying the following: PA is just a formal system and if it is inconsistent, then none of its formal deductions are reliable: false things can be proved without an explicit inconsistency appearing. He wants to say that this will not happen in type theory: the 'proofs' of a sentence \phi in type theory are not just strings of symbols, but are objects of type \phi that we actually construct and we can check that we have made a proper construction of such an object (a pair, a function, etc.). So even if there is an inconsistency in the large, individual proofs in type theory can be checked for reliability. For example, to see that a particular term (proof) t of type \forall x \phi(x) really yields a function of that type, one need 'only' to show that tn computes to an object of type \phi(n) for each number n.
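As a toy illustration of this point (my own example, in Lean, whose kernel implements a Martin-Löf-style dependent type theory; it is not code from the Univalent Foundations project): a proof of a universal statement is a function-term, the type-checker verifies by construction that the term has the stated type, and applying the term to a numeral computes a proof-object for that particular instance.

theorem addZero : ∀ n : Nat, n + 0 = n :=
  fun n => rfl  -- n + 0 reduces to n by computation, so reflexivity suffices

#check addZero      -- addZero : ∀ (n : Nat), n + 0 = n
#check addZero 3    -- addZero 3 : 3 + 0 = 3

This has exactly the shape of Tait's example: the term addZero plays the role of t, and addZero 3 computes to a checkable object of type 3 + 0 = 3.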
So it would seem that the main conclusion to be drawn from these debates, as pointed out by Steve Awodey in one of his messages to FOM, is that the consistency of PA is really a minor, secondary issue within a much broader and much more ambitious new approach to the foundations of mathematics. Time will tell what will come out of it, but given the brilliance of the people involved in the project and its innovative character, we can all look forward to the results to come. For those who would like to know more and keep track of the project, their website is:
Saturday, 4 June 2011
Science map
Cool, no?

(From Wired Science, where much more can be found.)
Wednesday, 1 June 2011
Explanation in Mathematics (Paolo Mancosu, SEP)
[Hat-tip to Chris Pincock at Honest Toil]
Intuitions Regarding Geometry Are Universal, Study Suggests
All human beings may have the ability to understand elementary geometry, independently of their culture or their level of education. This is the conclusion of a study carried out by CNRS, Inserm, CEA, the Collège de France, Harvard University and Paris Descartes, Paris-Sud 11 and Paris 8 universities. It was conducted on Amazonian Indians living in an isolated area, who had not studied geometry at school and whose language contains little geometric vocabulary. Their intuitive understanding of elementary geometric concepts was compared with that of populations who, on the contrary, had been taught geometry at school. The researchers were able to demonstrate that all human beings may have the ability to exercise geometric intuition. This ability may however only emerge from the age of 6-7 years. It could be innate or instead acquired at an early age when children become aware of the space that surrounds them. This work is published in PNAS.

The article is:
Véronique Izard, Pierre Pica, Elizabeth S. Spelke, and Stanislas Dehaene. 2011. Flexible intuitions of Euclidean geometry in an Amazonian indigene group. Proceedings of the National Academy of Sciences, 23 May 2011.