Friday, 11 April 2014

CfP: Agent-Based Modeling in Philosophy

LMU Munich
11-13 December 2014
www.lmu.de/abmp2014

In the past two decades, agent-based models (ABMs) have become ubiquitous in philosophy and various sciences.  ABMs have been applied, for example, to study the evolution of norms and language, to understand migration patterns of past civilizations, to investigate how population levels change in ecosystems over time, and more.  In contrast with classical economic models or population-level models in biology, ABMs are praised for requiring comparatively few assumptions and for their flexibility.  Nonetheless, many of the methodological and epistemological questions raised by ABMs have yet to be fully articulated and answered.  For example, there are unresolved debates about how to test (or "validate") ABMs, about the scope of their applicability in philosophy and the sciences, and about their implications for our understanding of reduction, emergence, and complexity in the sciences.  This conference aims to bring together an interdisciplinary group of researchers interested in understanding the foundations of agent-based modeling and in how the practice can inform and be informed by philosophy.

Topics of the conference will include, but will not be limited to:

  • Advantages and disadvantages of agent-based models in relation to classical economic and biological models
  • Testing and/or "validating" agent-based models
  • How agent-based models inform discussions of reduction and/or emergence in the sciences
  • Agent-based models and complexity
  • Applications of ABMs in philosophy, which may include, but are not limited to, the evolution of norms and/or language, and the dynamics of scientific communities and theory/paradigm change

We invite submissions of extended abstracts of 750-1000 words for contributed talks by 1 June 2014. Decisions will be made by 15 June 2014.

KEYNOTE SPEAKERS: Jason Alexander (LSE), Rosaria Conte (Rome), Scott Page (Michigan), Michael Weisberg (Penn), and Kevin Zollman (CMU)

ORGANIZERS: Lee Elkin, Stephan Hartmann, Conor Mayo-Wilson, and Gregory Wheeler

Thursday, 10 April 2014

More Thoughts on Constructing the World (David Chalmers)

With permission, I'm posting some of David Chalmers' quick thoughts/responses to Panu Raatikainen's critical notice of David's recent aufbauesque (2012) book, Constructing the World (some lectures on this are here on youtube):
---------------------
(1) Are bridge laws allowed in the scrutability base, and if so does this trivialize scrutability theses?
Bridge laws are certainly not disallowed from the base in general (indeed, I'd have psychophysical bridge laws in my own base). When I said that bridge laws were not allowed in the base, I was discussing a specific scrutability thesis: microphysical scrutability (where the base must be microphysical truths alone). On the other hand, building in separate bridge laws for water, kangaroos, and everything else will lead to a non-compact scrutability base. So there's no trivialization of the central compact scrutability thesis here.
(2) Is Carnap's $\omega$-rule powerful enough to yield scrutability of mathematical truths?
My discussion of the $\omega$-rule is intended to illustrate my response to the Gödelian objection to the scrutability of mathematical truths, rather than as a general account of the knowability of mathematical truths. It's an example of an idealized infinitary process that can get around Gödelian limitations. The $\omega$-rule suffices to settle first-order arithmetical truths but of course other infinitary methods will be needed in other domains. It's just false that inference rules assume the knowability of their premises, so there's no trivialization here.
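[For reference – an editorial addition, not part of DC's reply: in one standard formulation, the $\omega$-rule licenses the infinitary inference
$$\frac{\varphi(\overline{0}) \quad \varphi(\overline{1}) \quad \varphi(\overline{2}) \quad \cdots}{\forall x\,\varphi(x)}$$
i.e. from the infinitely many premises $\varphi(\overline{n})$, one for each numeral $\overline{n}$, one may infer the universal generalisation. Closing first-order arithmetic under this rule settles every first-order arithmetical sentence, at the price of admitting infinitary derivations.]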
(3) Is there a circularity in nomic truths being scrutable from microphysical truths and vice versa?
If one distinguishes ramsified and non-ramsified versions of microphysical truths, any apparent circularity disappears. Non-ramsified microphysical truths are scrutable from ramsified causal/nomic truths, which are scrutable from ramsified microphysical truths (including microphysical laws).
(4) What about Newman's and Scheffler's problems?
The "contemporary Newman problem" isn't a problem for my thesis, as my ramsification base isn't an observational base. As for Scheffler's problem: my first reaction (though this really is quick) is that Scheffler's example involves either ramsifying a trivial theory or giving an incomplete regimentation (and then ramsification) of a nontrivial theory. If those material conditionals really constitute the whole content of the theory (and the theory gives the whole content of the relevant theoretical term), then it's trivial in the way suggested. If the theory is formulated more completely e.g. with nomic or causal conditionals, the objection won't arise. Certainly the problem won't arise for the Ramsey sentences that my procedure yields.
(5) Why think special science truths are scrutable?
The arguments for scrutability of special science truths are in Chapters 3 and 4 (supplemented by 6), which are not discussed in the critical notice. The excursus on the unity of science is not intended as a primary argument for scrutability of special science truths. Rather, it is connecting the scrutability thesis to the unity/reduction literature and making the case that the thesis is a weak sort of unity/reduction thesis that survives common objections to unity or reduction theses.

[i've de-e.e.-cummingsified dc's decapitalization - jk.]

Wednesday, 9 April 2014

15th Congress of Logic, Methodology, and Philosophy of Science


CALL FOR PAPERS 

15TH CONGRESS OF LOGIC, METHODOLOGY AND PHILOSOPHY OF SCIENCE (CLMPS 2015)

University of Helsinki, Finland, 3-8 August 2015
http://www.helsinki.fi/clmps

SUBMISSION DEADLINE: 30 November 2014

The Congress of Logic, Methodology and Philosophy of Science (CLMPS) 
is organized every four years by the Division of Logic, Methodology 
and Philosophy of Science (DLMPS). The Philosophical Society of Finland, 
the Academy of Finland Centre of Excellence in the Philosophy of the Social
Sciences (TINT), and the Division of Theoretical Philosophy (Department 
of Philosophy, History, Culture and Art Studies) are proud to host the 15th
Congress of Logic, Methodology and Philosophy of Science (CLMPS 2015).
CLMPS 2015 is supported by the University of Helsinki and the Federation 
of Finnish Learned Societies.

CLMPS 2015 is co-located with the European Summer Meeting 
of the Association for Symbolic Logic, Logic Colloquium 2015 
(the abstract submission for Logic Colloquium 2015 opens in early 2015).

The congress will host six plenary lectures and several invited lectures.
The names of the plenary lecture speakers and invited speakers will be
announced soon.

CLMPS 2015 calls for CONTRIBUTED PAPERS, CONTRIBUTED SYMPOSIA, 
and AFFILIATED MEETINGS in 17 thematic sections:

A. Logic

A1. Mathematical Logic
A2. Philosophical Logic
A3. Computational Logic and Applications of Logic
A4. Historical Aspects of Logic

B. General Philosophy of Science

B1. Methodology
B2. Formal Philosophy of Science and Formal Epistemology
B3. Metaphysical Issues in the Philosophy of Science
B4. Ethical and Political Issues in the Philosophy of Science
B5. Historical Aspects in the Philosophy of Science

C. Philosophical Issues of Particular Disciplines

C1. Philosophy of the Formal Sciences (incl. Logic, Mathematics, Statistics, Computer Science)
C2. Philosophy of the Physical Sciences (incl. Physics, Chemistry, Earth Science, Climate Science)
C3. Philosophy of the Life Sciences
C4. Philosophy of the Cognitive and Behavioural Sciences
C5. Philosophy of the Humanities and the Social Sciences
C6. Philosophy of the Applied Sciences and Technology
C7. Philosophy of Medicine
C8. Metaphilosophy

In addition, the authors of some submitted abstracts will be invited to contribute 
to the International Union of History and Philosophy of Science (IUHPS) Joint
Commission Symposium Sessions, if the programme committee considers 
the abstracts well suited for IUHPS themes.

CONTRIBUTED PAPERS: Please submit an abstract of 300 words prepared 
for anonymous review. Accepted contributed papers will be allocated 
in total 30 minutes (20 min for the presentation + 10 min for the discussion).

CONTRIBUTED SYMPOSIA: Please submit an abstract of max. 1700 words prepared
for anonymous review.

The abstract should include: (i) a general description of the format and the topic 
of the proposed symposium and its significance (up to 500 words); (ii) a 300-word 
abstract of each paper (3-4 papers).

Each accepted contributed symposium will be allocated a full two-hour session.

AFFILIATED MEETINGS: Affiliated meetings are half-day to full-day symposia
that run parallel to the CLMPS 2015 programme, and belong to the congress
programme. Please consult the CLMPS 2015 submission guidelines for further
information.

RULES FOR MULTIPLE PRESENTATIONS

+ At most one contributed individual paper
+ One is allowed to present a second paper of which one is a co-author, 
but then the main author of this paper must submit the paper and be registered
as a participant.
+ If one participates in a contributed symposium proposal or an affiliated
meeting proposal, or is an invited speaker, one is not allowed to submit 
an individual contributed paper in which one is the main author (it is
possible to be a co-author of a contributed paper, but then the main author
of this paper must submit the paper and be registered as a participant).

Abstracts should be submitted by using the CLMPS 2015 registration form:
http://ilmo.contio.fi/academiceventsabstract/

Authors are kindly asked to consult the detailed submission guidelines before submitting:
http://helsinki.fi/clmps/materials/guidelines.pdf

All questions about submissions should be directed to the congress secretary,
Ms. Päivi Seppälä (clmps-2015@helsinki.fi). The members of the programme
committee, DLMPS committees and the local organising committee are listed here:
http://clmps.helsinki.fi/committees.php

Hannes Leitgeb (Chair of the Programme Committee)
Ilkka Niiniluoto (Chair of the Local Organizing Committee)

Critical notice of Chalmers, Constructing the World (2012) (by Panu Raatikainen)

David Chalmers recently published an ambitious and fascinating new book,
Constructing the World (Oxford University Press, 2012).
A critical notice by Panu Raatikainen (University of Helsinki) is here:
Raatikainen, P. 2014. "Chalmers' Blueprint of the World", International Journal of Philosophical Studies 22 (1): 113-128.

Monday, 7 April 2014

Buchak on risk and rationality III: the redescription strategy

This is the third in a series of three posts in which I rehearse what I hope to say at the Author Meets Critics session for Lara Buchak's tremendous* new book Risk and Rationality at the Pacific APA in a couple of weeks.  The previous two posts are here and here.  In the first post, I gave an overview of risk-weighted expected utility theory, Buchak's alternative to expected utility theory.  In the second post, I gave a prima facie reason for worrying about any departure from expected utility theory: if an agent violates expected utility theory (perhaps whilst exhibiting the sort of risk-sensitivity that Buchak's theory permits), then her preferences amongst the acts don't line up with her estimates of the value of those acts.  In this post, I want to consider a way of reconciling the preferences Buchak permits with the normative claims of expected utility theory.

I will be making a standard move.  I will be redescribing the space of outcomes in such a way that we can understand any Buchakian agent as setting her preferences in line with her expectation (and thus estimate) of the value of that act.

Thursday, 3 April 2014

How should we measure accuracy in epistemology? A new result

In recent formal epistemology, a lot of attention has been paid to a programme that one might call accuracy-first epistemology.  It is based on a particular account of the goodness of doxastic states: on this account, a doxastic state -- be it a full belief, a partial belief, or a comparative probability ordering -- is better the greater its accuracy; Alvin Goldman calls this account veritism.  This informal idea is often then made mathematically precise and the resulting formal account of doxastic goodness is used to draw various epistemological conclusions.

In this post, the doxastic states with which I will be concerned are credences or partial beliefs.  Such a doxastic state is represented by a single credence function $c$, which assigns a real number $0 \leq c(X) \leq 1$ to each proposition $X$ about which the agent has an opinion.  Thus, a measure of accuracy is a function $A$ that takes a credence function $c$ and a possible world $w$ and returns a number $A(c, w)$ that measures the accuracy of $c$ at $w$:  $A(c, w)$ takes values in $[-\infty, 0]$.

Beginning with Joyce 1998, a number of philosophers have given different characterisations of the legitimate measures of accuracy: Leitgeb and Pettigrew 2010; Joyce 2009; and D'Agostino and Sinigaglia 2009.  Leitgeb and Pettigrew give a very narrow characterisation, as do D'Agostino and Sinigaglia:  they agree that the so-called Brier score (or some strictly increasing transformation of it) is the only legitimate measure of accuracy.  Joyce, on the other hand, gives a much broader characterisation.  I find none of these characterisations adequate, though I won't enumerate my concerns here.  Rather, in this post, I'd like to offer a new characterisation.
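To make this concrete, here is a minimal sketch (my own illustration, not part of the original post) of the Brier score turned into an accuracy measure of the kind just described: the accuracy of $c$ at $w$ is the negative of the sum of squared differences between $c(X)$ and the truth value of $X$ at $w$, so perfect accuracy is $0$ and all values are non-positive, in line with the convention that $A(c, w)$ takes values in $[-\infty, 0]$.

def brier_accuracy(credences, world):
    # credences: dict mapping each proposition X to the credence c(X) in [0, 1]
    # world: dict mapping each proposition X to its truth value v_w(X), 1 (true) or 0 (false)
    return -sum((credences[X] - world[X]) ** 2 for X in credences)

c = {"rain": 0.7, "wind": 0.2}
w = {"rain": 1, "wind": 0}           # a world where it rains but is not windy
print(brier_accuracy(c, w))          # -(0.3**2 + 0.2**2) = -0.13 (up to floating point)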

Friday, 28 March 2014

Counting Infinities

(Cross-posted at NewAPPS)

In his Two New Sciences (1638), Galileo presents a puzzle about infinite collections of numbers that became known as ‘Galileo’s paradox’. The work is written in the form of a dialogue, and the interlocutors observe that there are many more positive integers than there are perfect squares, but also that every positive integer is the square root of exactly one perfect square. And so, there is a one-to-one correspondence between the positive integers and the perfect squares, and thus we may conclude that there are as many positive integers as there are perfect squares. And yet, the initial assumption was that there are more positive integers than perfect squares, as every perfect square is a positive integer but not vice-versa; in other words, the collection of the perfect squares is strictly contained in the collection of the positive integers. How can they be of the same size then?

Galileo’s conclusion is that principles and concepts pertaining to the size of finite collections cannot be simply transposed, mutatis mutandis, to cases of infinity: “the attributes "equal," "greater," and "less," are not applicable to infinite, but only to finite, quantities.” With respect to finite collections, two uncontroversial principles hold:

Part-whole: a collection A that is strictly contained in a collection B has a strictly smaller size than B.

One-to-one: two collections for which there exists a one-to-one correspondence between their elements are of the same size.

What Galileo’s paradox shows is that, when moving to infinite cases, these two principles clash with each other, and thus that at least one of them has to go. In other words, we simply cannot transpose these two basic intuitions pertaining to counting finite collections to the case of infinite collections. As is well known, Cantor chose to keep One-to-one at the expense of Part-whole, famously concluding that all countable infinite collections are of the same size (in his terms, have the same cardinality); this is still the reigning orthodoxy.
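A quick illustration of the clash (my own sketch, not from the original discussion): among the first N positive integers the perfect squares form a proper subset, which is what drives the Part-whole intuition, while the map $n \mapsto n^2$ pairs every positive integer with exactly one perfect square, which is what drives the One-to-one intuition.

import math

N = 20
positives = list(range(1, N + 1))

# One-to-one: n <-> n^2 pairs each positive integer with exactly one perfect square
pairing = [(n, n * n) for n in positives]

# Part-whole: the perfect squares among the first N positive integers form a proper subset
squares_up_to_N = [n for n in positives if math.isqrt(n) ** 2 == n]

print(pairing[:5])                             # [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
print(len(positives), len(squares_up_to_N))    # 20 vs 4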

In recent years, an alternative approach to measuring infinite sets has been developed by the mathematicians Vieri Benci (who initiated the project), Mauro Di Nasso, and Marco Forti. It is also being further explored by a number of people – including logicians/philosophers such as Paolo Mancosu, Leon Horsten and my colleague Sylvia Wenmackers. This framework is known as the theory of numerosities, and has a number of interesting theoretical as well as more practical features. The basic idea is to prioritize Part-whole over One-to-one; this is accomplished in the following way (Mancosu 2009, p. 631):

Informally the approach consists in finding a measure of size for countable sets (including thus all subsets of the natural numbers) that satisfies [Part-whole]. The new ‘numbers’ will be called ‘numerosities’ and will satisfy some intuitive principles such as the following: the numerosity of the union of two disjoint sets is equal to the sum of the numerosities.
Basically, what the theory of numerosities does is to introduce different units, so that on these new units infinite sets come out as finite. (In other words, it is a clever way to turn infinite sets into finite sets. Sounds suspicious? Hum…) In practice, the result is a very robust, sophisticated mathematical theory, which turns the idea of measuring infinite sets upside down.
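To get a feel for the 'different units' remark, here is a toy sketch of the finite-approximation idea (my own illustration; the actual theory of numerosities turns such finite counts into genuine infinite 'numbers' via a delicate limiting construction, which this sketch does not attempt): counting the elements of each set below a finite cutoff always respects Part-whole.

import math

def count_below(predicate, n):
    # Count how many k in {1, ..., n} satisfy the predicate
    return sum(1 for k in range(1, n + 1) if predicate(k))

n = 10_000
naturals = count_below(lambda k: True, n)                     # 10000
evens = count_below(lambda k: k % 2 == 0, n)                  # 5000
squares = count_below(lambda k: math.isqrt(k) ** 2 == k, n)   # 100

# At every finite stage, a proper subset never counts as larger, as Part-whole demands
print(naturals, evens, squares)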

The philosophical implications of the theory of numerosities for the philosophy of mathematics are far-reaching, and some of them have been discussed in detail in (Mancosu 2009). Philosophically, the mere fact that there is a coherent, theoretically robust alternative to Cantorian orthodoxy raises all kinds of questions pertaining to our ability to ascertain what numbers ‘really’ are (that is, if there are such things indeed). It is not surprising that Gödel, an avowed Platonist, considered the Cantorian notion of infinite number to be inevitable: there can be only one correct account of what infinite numbers really are. As Mancosu points out, now that there is a rigorously formulated mathematical theory that forsakes One-to-one in favor of Part-whole, it is far from obvious that the Cantorian road is the inevitable one.

As mathematical theories, Cantor’s theory of infinite numbers and the theory of numerosities may co-exist in peace, just as Euclidean and non-Euclidean geometries live peacefully together (admittedly, after a rough start in the 19th century). But philosophically, we may well see them as competitors, only one of which can be the ‘right’ theory about infinite numbers. But what could possibly count as evidence to adjudicate the dispute?

One motivation to abandon Cantorian orthodoxy might be that it fails to provide a satisfactory framework to discuss certain issues. For example, Wenmackers and Horsten (2013) adopt the alternative approach to treat certain foundational issues that arise with respect to probability distributions in infinite domains. It is quite possible that other questions and areas where the concept of infinity figures prominently can receive a more suitable treatment with the theory of numerosities, in the sense that oddities that arise by adopting Cantorian orthodoxy can be dissipated.

On a purely conceptual, foundational level, the dispute might be viewed as one between Part-whole and One-to-one, as to which of the two is the most fundamental principle when it comes to counting finite collections – which would then be generalized to the infinite cases. They are both eminently plausible, and this is why Cantor’s solution, while now widely accepted, remains somewhat counterintuitive (as anyone having taught this material to students surely knows). Thus, it is hard to see what could possibly count as evidence against one or the other.


Now, after having thought a bit about this material (prompted by two wonderful talks by Wenmackers and Mancosu in Groningen yesterday), and somewhat to my surprise, I find myself having a lot of sympathy for Galileo’s original response. Maybe what holds for counting finite collections simply does not hold for measuring infinite collections. And if this is the case, our intuitions concerning the finite cases, and in particular the plausibility of both Part-whole and One-to-one, simply have no bearing on what a theory of counting infinite collections should be like. There may well be other reasons to prefer the numerosities approach over Cantor’s approach (or vice-versa), but I submit that turning to the idea of counting finite collections is not going to provide relevant material for the dispute in the infinite cases. In fact, from this point of view, an entirely different way of measuring infinite collections, where neither Part-whole nor One-to-one holds, is at least in principle conceivable. In what way the term ‘counting’ would then still apply might be a matter of contention, but perhaps counting infinities is a totally different ball game after all.

Thursday, 27 March 2014

CFP: *Extended Deadline* Symposium on the Foundations of Mathematics, Kurt Gödel Research Center, University of Vienna, 7-8 July 2014.

Date and Venue: 7-8 July 2014 - Kurt Gödel Research Center, Vienna

Confirmed Speakers:
  • Sy-David Friedman (Kurt Gödel Research Center for Mathematical Logic)
  • Hannes Leitgeb (Munich Center for Mathematical Philosophy)
Call for Papers: We welcome submissions from scholars (in particular, young scholars, i.e. early career researchers or post-graduate students) on any area of the foundations of mathematics (broadly construed). Particularly desired are submissions that address the role of set theory in the foundations of mathematics, or the foundations of set theory (universe/multiverse dichotomy, new axioms, etc.) and related ontological and epistemological issues. Applicants should prepare an extended abstract (maximum 1,500 words) for blind review, and send it to sotfom [at] gmail [dot] com. The successful applicants will be invited to give a talk at the conference and will be refunded the cost of accommodation in Vienna for two days (7-8 July).

*New* Submission Deadline: 15 April 2014
Notification of Acceptance: 30 April 2014

Set theory is taken to serve as a foundation for mathematics. But it is well-known that there are set-theoretic statements that cannot be settled by the standard axioms of set theory. The Zermelo-Fraenkel axioms, with the Axiom of Choice (ZFC), are incomplete. The primary goal of this symposium is to explore the different approaches that one can take to the phenomenon of incompleteness.

One option is to maintain the traditional “universe” view and hold that there is a single, objective, determinate domain of sets. Accordingly, there is a single correct conception of set, and mathematical statements have a determinate meaning and truth-value according to this conception. We should therefore seek new axioms of set theory to extend the ZFC axioms and minimize incompleteness. It is then crucial to determine what justifies some new axioms over others.

Alternatively, one can argue that there are multiple conceptions of set, depending on how one settles particular undecided statements. These different conceptions give rise to parallel set-theoretic universes, collectively known as the “multiverse”. What mathematical statements are true can then shift from one universe to the next. From within the multiverse view, however, one could argue that some universes are preferable to others.

These different approaches to incompleteness have wider consequences for the concepts of meaning and truth in mathematics and beyond. The conference will address these foundational issues at the intersection of philosophy and mathematics. The primary goal of the conference is to showcase contemporary philosophical research on different approaches to the incompleteness phenomenon. To accomplish this, the conference has the following general aims and objectives: (1) To bring to a wider philosophical audience the different approaches that one can take to the set-theoretic foundations of mathematics. (2) To elucidate the pressing issues of meaning and truth that turn on these different approaches. (3) To address philosophical questions concerning the need for a foundation of mathematics, and whether or not set theory can provide the necessary foundation.
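A canonical example of such an undecided statement, added here only by way of illustration (it is not singled out in the call), is Cantor's Continuum Hypothesis,
$$2^{\aleph_0} = \aleph_1,$$
which Gödel showed to be consistent with ZFC (1938) and Cohen showed to be independent of it (1963).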

Scientific Committee: Philip Welch (University of Bristol), Sy-David Friedman (Kurt Gödel Research Center), Ian Rumfitt (University of Birmingham), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Gödel Research Center), Neil Barton (Birkbeck College), Chris Scambler (Birkbeck College), Jonathan Payne (Institute of Philosophy), Andrea Sereni (Università Vita-Salute S. Raffaele), Giorgio Venturi (Université de Paris VII, “Denis Diderot” - Scuola Normale Superiore)

Organisers: Sy-David Friedman (Kurt Gödel Research Center), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Gödel Research Center), Neil Barton (Birkbeck College), Carolin Antos (Kurt Gödel Research Center)

Conference Website: sotfom [dot] wordpress [dot] com

Further Inquiries: please contact Claudio Ternullo (ternulc7 [at] univie [dot] ac [dot] at), Neil Barton (bartonna [at] gmail [dot] com), or John Wigglesworth (jmwigglesworth [at] gmail [dot] com).

Tuesday, 25 March 2014

Buchak on risk and rationality II: the virtues of expected utility theory

In a previous post, I gave an overview of the alternative to expected utility theory that Lara Buchak formulates and defends in her excellent new book, Risk and Rationality (Buchak 2013).  Buchak dubs the alternative risk-weighted expected utility theory.  It permits agents to have risk-sensitive attitudes.  In this post and the next one, I wish to argue that risk-weighted expected utility theory is right about the constraints that rationality places on our external attitudes, but wrong about the way our internal attitudes ought to combine to determine those external attitudes (for the internal/external attitude terminology, as well as other terminology in this post, please see the previous post):  that is, I agree with the axioms Buchak requires our preferences to satisfy, but I disagree with the way she combines probabilities, utilities, and risk attitudes to determine those preferences.  I wish to argue that, in fact, we ought to combine our internal attitudes in exactly the way that expected utility theory suggests.  In order to maintain both of these positions, I will have to redescribe the outcomes to which we assign utilities.  I do this in the next post.  In this post, I want to argue that all the effort that we will go to in order to effect this redescription is worth it.  That is, I want to argue that there are good reasons for thinking that an agent's internal attitudes ought to be combined to give her external attitudes in exactly the way prescribed by expected utility theory. (These three posts will together provide the basis for my commentary on Buchak's book at the Pacific APA this April.)
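By way of illustration, here is a rough sketch of the rank-dependent way risk-weighted expected utility combines these ingredients, as I understand Buchak's definition (the particular utilities, probabilities, and risk function below are hypothetical choices of mine, purely for illustration): line the outcomes up from worst to best, start from the utility of the worst outcome, and add each further increment in utility weighted by $r$ applied to the probability of doing at least that well. With $r(p) = p$ this reduces to ordinary expected utility.

def expected_utility(outcomes, probs, u):
    return sum(p * u(x) for x, p in zip(outcomes, probs))

def risk_weighted_eu(outcomes, probs, u, r):
    # Rank the outcomes from worst to best (by utility)
    ranked = sorted(zip(outcomes, probs), key=lambda pair: u(pair[0]))
    xs = [x for x, _ in ranked]
    ps = [p for _, p in ranked]
    reu = u(xs[0])                            # start from the worst outcome's utility
    for i in range(1, len(xs)):
        prob_at_least = sum(ps[i:])           # probability of getting at least outcome xs[i]
        reu += r(prob_at_least) * (u(xs[i]) - u(xs[i - 1]))
    return reu

u = lambda x: x                # linear utility, for simplicity
r = lambda p: p ** 2           # a convex risk function (roughly: risk-averse weighting)

outcomes, probs = [0, 100], [0.5, 0.5]
print(expected_utility(outcomes, probs, u))       # 50.0
print(risk_weighted_eu(outcomes, probs, u, r))    # 0 + r(0.5) * 100 = 25.0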

Friday, 21 March 2014

Logical foundations for mathematics? The first-order vs. second-order ‘dichotomy’? (Part IV of 'Axiomatizations of arithmetic...')

(It took me much longer than I had anticipated to get back to this paper, but here is the final part of my paper on axiomatizations of arithmetic and the first-order/second-order divide. Part I is here; Part II is here; Part III is here. As always, comments are welcome!)

3. Logical foundations for mathematics? The first-order vs. second-order ‘dichotomy’?

Given the (apparent) impossibility of tackling the descriptive and deductive projects at once with one and the same underlying logical system – what Tennant (2000) describes as ‘the impossibility of monomathematics’ – what should we conclude about the general project of using logic to investigate the foundations of mathematics? And what should we conclude about the first-order vs. second-order divide? I will discuss each of these two questions in turn.

If the picture sketched in the previous sections is one of partial failure, it can equally well be seen as a picture of partial success. Indeed, a number of first-order mathematical theories can be made categorical with suitable second-order extensions (Read 1997). And thus, as argued by Read, there is a sense in which the completeness project of the early days of formal axiomatics has been achieved (despite Gödel’s results), namely in the descriptive sense countenanced by Dedekind and others.

Moreover, categoricity failure need not be viewed as a complete disaster, if one bears in mind Shapiro’s (1997) useful distinction between algebraic and nonalgebraic theories:
Roughly, non-algebraic theories are theories which appear at first sight to be about a unique model: the intended model of the theory. We have seen examples of such theories: arithmetic, mathematical analysis… Algebraic theories, in contrast, do not carry a prima facie claim to be about a unique model. Examples are group theory, topology, graph theory… (Horsten 2012, section 4.2)
In this vein, proofs of (non-)categoricity can be viewed as a means of classifying algebraic and non-algebraic theories (Meadows 2013). This means that the descriptive (non-algebraic) project of picking out a previously chosen mathematical structure and describing it in logical terms has developed into the more general descriptive project of studying theories and groups of theories not only insofar as they instantiate unique structures (i.e. non-algebraic as well as algebraic versions of the descriptive project).

On the deductive side, things may seem less rosy at first sight. In a sense, first-order logic is not only descriptively inadequate: it is also deductively inadequate, given the impossibility of a deductively complete first-order theory of the natural numbers, and the fact that first-order logic itself is undecidable (though complete). It does have a better behaved underlying notion of logical consequence when compared to second-order logic, but it still falls short of delivering the deductive power that e.g. Frege or Hilbert would have hoped for. In short, first-order logic might be described as being ‘neither here nor there’.

However, if one looks beyond the confines of first-order or second-order logic, developments in automated theorem proving suggest that the deductive use as described by Hintikka is still alive and kicking. Sure enough, there is always the question of whether a given mathematical theorem, formulated in ‘ordinary’ mathematical language, is properly ‘translated’ into the language used by the theorem-proving program. But automated theorem proving is in many senses a compelling instantiation of Frege’s idea of putting chains of reasoning to test.

Recently, the new research program of homotopy type theory promises to bring a whole new perspective to the foundations of mathematics. In particular, its base logic, Martin-Löf’s constructive type theory, is known to enjoy very favorable computational properties, and the focus on homotopy theory brings in a clear descriptive component. It is too early to tell whether homotopy type theory will indeed change the terms of the game (as its proponents claim), but it does seem to offer new prospects for the possibility of unifying the descriptive perspective and the deductive perspective.

In sum, what we observe currently is not a complete demise of the original descriptive and deductive projects of pioneers such as Frege and Dedekind, but rather a transformation of these projects into more encompassing, more general projects.

As for the first-order vs. second-order divide, it may be instructive to look in more detail into the idea of second-order extensions of first-order theories, specifically with respect to arithmetic. Some of these proposals can be described as ‘optimization projects’ that seek to incorporate the least amount of second-order vocabulary so as to ensure categoricity, while producing a deductively well-behaved theory. In other words, the goal of an optimal tradeoff between expressiveness and tractability may not be entirely unreasonable after all.

One such example is the framework of ‘ancestral logic’ (Avron 2003, Cohen 2010). Smith (2008) argues on plausible conceptual grounds that our basic intuitive grasp of arithmetic surely does not require the whole second-order conceptual apparatus, but only the concept of the ancestral of a relation, or the idea of transitive closure under iterable operations (my parents had parents, who in turn had parents, who themselves had parents, and so on). Another way to arrive at a similar conclusion is to appreciate that what is needed to establish categoricity by extending a first-order theory is nothing more than the expressive power required to formulate the induction principle in its full generality – that is, the last, second-order axiom in the Dedekind/Peano axiomatization (the one needed to exclude ‘alien intruders’). Here again, the concept of the ancestral of a relation is a plausible candidate (Smith 2008, section 3; Cohen 2010, section 5.3).
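For concreteness, here is the standard contrast (in my formulation) between the first-order induction schema, which has one instance for each formula $\varphi(x)$ of the language,
$$[\varphi(0) \wedge \forall x\,(\varphi(x) \rightarrow \varphi(Sx))] \rightarrow \forall x\, \varphi(x),$$
and the single second-order induction axiom, which quantifies over all properties at once:
$$\forall F\, \big([F(0) \wedge \forall x\,(F(x) \rightarrow F(Sx))] \rightarrow \forall x\, F(x)\big).$$
The schema covers only the countably many definable properties, which is why it cannot exclude non-standard models; the second-order axiom does, at the price of second-order quantification.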

Extensions of first-order logic with the concept of the ancestral yield a number of interesting systems (Smith 2008, section 4; Cohen 2010, chapter 5). These systems, while not being fully axiomatizable (Smith 2008, section 4), enjoy a number of favorable proof-theoretical properties (Cohen 2010, chapter 5). Indeed, they are vastly ‘better behaved’ from a deductive point of view than full-blown second-order logic – and of course, they are categorical.

Significant for our purposes is the status of the notion of the ancestral, which straddles first-order and second-order logic. Smith argues that the fact that this notion can be defined in second-order terms does not necessarily mean that it is an essentially higher-order notion:

In sum, the claim is that the child who moves from a grasp of a relation to a grasp of the ancestral of that relation need not thereby manifest an understanding of second-order quantification interpreted as quantification over arbitrary sets. It seems, rather, that she has attained a distinct conceptual level here, something whose grasp requires going beyond a grasp of the fundamental logical constructions regimented in first-order logic, but which doesn’t take us as far as an understanding of full second-order quantification. (Smith 2008)

What this suggests is that the first-order vs. second-order divide itself may be too coarse to describe adequately the conceptual building blocks of arithmetic. It is clear that purely first-order vocabulary will not yield categoricity, but it would be misguided to view the move to full-blown second-order logic as the next ‘natural’ step. In effect, as argued by Smith, the concept of the ancestral of a relation is essentially neither first-order nor second-order, properly speaking. So maybe the problem lies precisely in the coarse first-order vs. second-order dichotomy when it comes to the key concepts at the foundations of arithmetic (such as the concept of the ancestral, or Dedekind’s notion of chains). We may need different, intermediate categories to classify and analyze these concepts more accurately.
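For concreteness, the second-order definition of the (strong) ancestral alluded to above can be given, following Frege, as
$$R^{*}(a,b) \;:\Leftrightarrow\; \forall F\, \big[\,\forall x\,(R(a,x) \rightarrow F(x)) \wedge \forall x \forall y\,(F(x) \wedge R(x,y) \rightarrow F(y)) \rightarrow F(b)\,\big],$$
i.e. $b$ has every property that is had by everything to which $a$ bears $R$ and that is hereditary along $R$. The definition uses a second-order quantifier, but, on Smith's view, grasping 'reachable from $a$ by finitely many $R$-steps' need not amount to grasping quantification over arbitrary sets.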


4. Conclusions

My starting point was the observation that first-order Peano Arithmetic is non-categorical but deductively well-behaved, while second-order Peano Arithmetic is categorical but deductively ill-behaved. I then turned to Hintikka’s distinction between descriptive and deductive approaches for the foundations of mathematics. Both approaches were represented in the early days of formal axiomatics at the end of the 19th century, but the descriptive approach was undoubtedly the predominant one; Frege was then the sole representative of the deductive approach.

Given the (apparent?) impossibility of combining both approaches in virtue of the orthogonal desiderata of expressiveness and tractability, one might conclude (as Tennant (2000) seems to argue) that the project of providing logical foundations for mathematics itself is misguided from the start. But I have argued that a story of partial failure is also a story of partial success, and that both projects (descriptive and deductive) remain fruitful and vibrant. I have also argued that an investigation of the conceptual foundations of arithmetic seems to suggest that the first-order vs. second-order dichotomy is in fact too coarse, as some key concepts (such as the concept of the ancestral of a relation) seem to inhabit a ‘limbo’ between the two realms.

One of the main conclusions I wish to draw from these observations is that there is no such thing as a unique project for the foundations of mathematics. Here we focused on two distinct projects, descriptive and deductive, but there may well be others. While it may seem that these two perspectives are incompatible, there is both the possibility of ‘optimization projects’, i.e. the search for the best trade-off between expressive and deductive power (e.g. ancestral arithmetic), and the possibility that an entirely new approach (maybe homotopy type-theory?) may even dissolve the apparent impossibility of fully engaging in both projects at once. It is perhaps due to an excessive focus on the first-order vs. second-order divide that we came to think that the two projects are incompatible.

At any rate, the choice of formalism/logical framework will depend on the exact goals of the formalization/axiomatization. Here, the focus has been on the expressiveness-tractability axis, but there may well be other relevant parameters. Now, if we acknowledge that there may be more than one legitimate theoretical goal when approaching mathematics with logical tools (and here we discussed two, prima facie equally legitimate approaches: descriptive and deductive), then there is no reason why there should be a unique, most appropriate logical framework for the foundations of mathematics. The picture that emerges is of a multifaceted, pluralistic enterprise, not of a uniquely defined project, and thus one allowing for multiple, equally legitimate perspectives and underlying theoretical frameworks. A plurality of goals suggests a form of logical pluralism, and thus, perhaps there is no real ‘dispute’ between first-order and second-order logic in this domain.