Tuesday, 22 January 2013

Call for papers on the work of Leon Henkin

Call for papers to the volume:
Leon Henkin (Essays on His Contributions), María Manzano, Ildiko Sain and Enrique Alonso (eds.)
Springer Basel is going to publish a volume of contributed papers devoted to the life and work of Leon Henkin. This volume will appear in the Studies in Universal Logic series. Algebraic logic, model theory, type theory, completeness theorems, and philosophical and foundational studies are among the topics we would like to cover, as well as mathematical education. We plan to discuss Henkin's intellectual development, his relation to his predecessors and contemporaries, and his impact on the recent development of mathematical logic. To our knowledge, no books of this kind have been published. This is a call for papers inviting all interested parties to send a contribution. The book is intended to include several invited contributions and articles that will be selected from the submitted material. We plan to produce a monographic study of:
i. Henkin’s scientific works. 
ii. The influence of his work throughout the development of contemporary Logic and Mathematics. 
iii. His personal interest in teaching and the didactics of the formal sciences, with special attention to underrepresented minorities; in particular, mathematically talented minority undergraduate students.
The articles must comply with one of two modalities:
a. Research articles which must pertain to one of the three themes listed above.  
b. Biographical notes, in the form of short stories describing personal experiences related to the figure of Leon Henkin as a professor, lecturer, researcher or pedagogue. 
The deadline for submissions is September 30th 2013.

We feel that it is about time for a comprehensive book on the life and works of Leon Henkin to be written. It will include both foundational material and a logic perspective.

María Manzano, Ildiko Sain and Enrique Alonso

(For more information, please visit the website http://logicae.usal.es/henkin)

Sunday, 20 January 2013

Other Formulations of the Quine-Putnam Indispensability Argument

In the previous post, I formulated the Quine-Putnam Indispensability argument as follows.
The Quine-Putnam Indispensability Argument (JK)
(1) Mathematicized theories are inconsistent with nominalism.
(2) Our best scientific theories are mathematicized.
(C) So, if one accepts our best scientific theories, one must reject nominalism.
I believe that this formulation is quite faithful to the intentions of Quine and Putnam, as well as to those of the people involved in the early phase of the debate: Field, Burgess, Shapiro, Chihara and Hellman.

In contrast, here is Mark Colyvan's much more recent formulation of the argument (from his Stanford Encyclopedia entry, "Indispensability Arguments in the Philosophy of Mathematics"):
The Quine-Putnam Indispensability Argument (MC):
(P1) We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories.
(P2) Mathematical entities are indispensable to our best scientific theories.
(C) We ought to have ontological commitment to mathematical entities.
This kind of formulation has become quite widely cited. But I think it is mistaken as a formulation of what Quine and Putnam had in mind. Here are the reasons.

First, Quine and Putnam were not primarily advocating a view about what "we ought to have ontological commitment to". For Quine and Putnam, ontological commitment is a semantic property of sentences and theories, not a normative epistemic property of human beings. Whether a theory $T$ implies that there are $F$s can be established by regimenting this theory into some precise canonical notation, as $T^{\ast}$ say, and then seeing whether, thus regimented, $T^{\ast}$ logically implies $\exists x Fx$. Whether one "accepts" this theory or not is immaterial, as Quine emphasized in 1948. Admittedly, Quine and Putnam do sometimes talk loosely about "accepting abstract entities into our ontology", but that really is loose talk, elliptical for "accepting some theory which implies that there are abstract entities". Here, acceptance of the theory is an epistemic matter, separable from the semantic matter concerning which sentences of the form $\exists x Fx$ the theory implies.
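
To take a toy illustration of my own (not Quine's example): suppose $T$ contains the sentence "there is a prime number greater than a million", regimented as
$\exists x (Prime(x) \wedge x > 10^{6})$.
This logically implies $\exists x \: Prime(x)$, so the regimented theory $T^{\ast}$ is committed to numbers, quite regardless of whether anyone happens to accept $T$.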

Second, the notion of "we" (i.e., human cognition) having "ontological commitment" to an entity is problematic. All parties to the debate up to, say, the mid-1990s took the notion of ontological commitment to involve a property of sentences and theories; we may accept, or may not accept, sentences and theories (representations, if you prefer; or even propositions, if you prefer). Those sentences, theories, etc., may, or may not, imply that there are $F$s. The notion of cognition bearing "ontological commitment" to an entity is a bit mysterious to me. Is the relation semantical? All knowledge of the world is mediated by cognitive representations, and the whole idea of direct contact with objects is something I'm rather sceptical about. (In other words, I am defending Lockean indirect realism.)

Third, there is an epistemological side of things, for Quine and Putnam. This comes from a background acceptance of science. For both, we are to accept science roughly as is. Quine's long-held view might be called a kind of realistic pragmatism; and Putnam's view (as of 1971, Philosophy of Logic, and 1975, "What is Mathematical Truth?") was classic scientific realism. As philosophers, we then subject science---which we have already accepted---to analysis. This position is defended by Russell too, in his 1950 article "Logical Positivism":
For my part, I assume that science is broadly speaking true, and arrive at the necessary postulates by analysis. But against the thoroughgoing sceptic I can advance no argument except that I do not believe him to be sincere. (Russell, 1950)
Both Quine and Putnam reject a certain kind of First Philosophy. They are not arguing from the perspective of the armchair First Philosopher or Cartesian, who has purged her mind of all "commitments" and who is wondering what to "accept" from a baseline of noble ontological innocence. On the contrary, one already accepts science, as is. One is subjecting science itself to analysis.

Finally, it seems misleading to me to say that the Quine-Putnam indispensability argument is meant to provide reasons for accepting the existence of mathematical entities. Rather, it argues that our working scientific theories presuppose the existence of mathematical entities, and are the best theories around. Consequently, we are being "intellectually dishonest" if we accept science, while feigning to reject those entities.

The Quine-Putnam Indispensability Argument

Many years ago I finished my PhD, entitled "The Mathematicization of Nature" (1998, LSE), in which I discussed the applicability of mathematics and the Quine-Putnam indispensability argument, and considered a number of nominalist responses to it, in the end rejecting them all. The monograph Burgess & Rosen 1997, A Subject with No Object, had appeared a year earlier. At the time, I considered the issue definitively settled, and so I decided not to bother publishing anything in the area, as it would be pointless. (I did publish Ch. 5, which was about truth theories and deflationism.)

Jeez was I wrong! In the last fourteen years, the debate about the indispensability argument has continued, taking off in many different directions. And I'm pretty baffled at the whole thing. Even the formulation of the Indispensability Argument often given is incorrect, as far as I can see. So, here is mine, and I think it is reasonably faithful to the intentions of both Quine and Putnam.

1. Nominalism

Nominalism (in mathematics) is the claim that there are no numbers, sets, functions, and so on. (In addition, nominalism normally implies also that there are no syntactical types: i.e., finite sequences of symbols. Consequently there is a problem for nominalism at the level of syntax, a problem discussed long ago by Quine & Goodman 1947, "Steps Toward a Constructive Nominalism".) In particular, there are no mixed sets and no mixed functions. A mixed set is a set of non-mathematical entities, and a mixed function is a function whose domain or range includes some non-mathematical entities.

However, modern science is up to its neck in mixed sets and functions. All the various quantities invoked in science are mixed functions. Laws of nature express properties of such mixed functions, and relations between them. A differential equation in physics usually expresses some property of some mixed function(s). For example, it might say that a function defined on time instants has a certain property.
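
To give a schematic illustration of my own: let $T$ be the temperature of a cup of coffee, construed as a mixed function from time instants to real numbers. Newton's law of cooling,
$\frac{dT}{dt} = -k(T - T_{env})$,
then attributes a property to this mixed function $T$: its rate of change is proportional to the gap between its value and the ambient temperature $T_{env}$.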

2. The Quine-Putnam Indispensability Argument 

Quine and Putnam both gave versions of an argument, which I formulate like this:
The Quine-Putnam Indispensability Argument
(1) Mathematicized theories are inconsistent with nominalism.
(2) Our best scientific theories are mathematicized.
(C) So, if one accepts our best scientific theories, one must reject nominalism. 
(The name "Quine-Putnam Indispensability Argument" derives, I believe, from Hartry Field.)

The argument for the first premise (1) is based on the following kind of example. Maxwell’s Laws include the mathematicized law:
At any spacetime point $p$, $(\underline{\nabla} \cdot \underline{B})(p) = 0$.
This is often abbreviated "$(\underline{\nabla} \cdot \underline{B}) = 0$", but it is clear that quantification over spacetime points is implicitly intended.

Since $\underline{B}$ is a vector field on spacetime, it is a mixed function, whose domain is spacetime, and whose range is some vector space (one that is isomorphic to $\mathbb{R}^3$). If nominalism is true, it follows that $\underline{B}$ does not exist, and therefore that Maxwell's Law, "$(\underline{\nabla} \cdot \underline{B}) = 0$", is false. (A slightly fancier version of this would refer instead to the electromagnetic field tensor $F_{ab}$, whose components unify the $\underline{B}$-field and the $\underline{E}$-field; but the considerations are more or less the same.) In general, if nominalism is true, then any such mathematicized theory is false. This establishes (1).

If this is right, then we have a major worry: this shows that a certain philosophical theory (nominalism) contradicts science. This is probably the central reason I am suspicious  of nominalism.

The argument for the second premise (2) requires one to compare our working mathematicized theories (Maxwell’s theory; Schroedinger equation; Einstein’s field equations; Yang-Mills gauge theories, etc.) with proposed nominalistic replacements. Having done this, one then concludes that either there are insuperable technical obstacles to the nominalization of such theories; or, though there may be, for certain mathematicized theories, nominalized replacements, even so, the mathematicized original is always a scientifically better theory, by scientific standards. (This is the sort of point emphasized by John Burgess, who semi-hemi-demi-jokingly suggested that nominalists might submit articles with their replacement theories to The Physical Review.)

So, our best scientific theories are mathematicized and are inconsistent with nominalism. Hence, if one accepts such theories, one must reject nominalism. This conclusion is epistemic only in a conditional sense. It simply says that one cannot have one’s cake and eat it. One cannot be a nominalist and a scientific realist.

3. Responses 

3.1 Rejecting (1): The rough idea is that mathematicized theories are consistent with nominalism. So, such theories may be true even though there are no mathematical entities. So, the magnetic field $\underline{B}$ doesn’t exist, but, even so, Maxwell’s Laws are true. This kind of view is advocated by Jody Azzouni (2004, Deflating Existential Consequence: A Case for Nominalism), but I'm not sure I quite understand it.

3.2 Rejecting (2): Our working scientific theories can be nominalized, and such theories are epistemically better. The betterness consists in the advantage that issues from the elimination of mathematicalia. This is essentially Hartry Field’s approach (Field 1980, Science Without Numbers).

3.3 Accepting, but living with, the conclusion: a nominalist might accept the Quine-Putnam argument, conceding the premises, but insist that one may “accept” mathematicized scientific theories in a weaker sense, which involves only accepting their nominalistic content. This is essentially Mary Leng’s and Joseph Melia's approach (Leng 2010, Mathematics and Reality; and Melia 2000, "Weaseling Away the Indispensability Argument", Mind).

Saturday, 19 January 2013

What is Metamathematics?

Mathematics consists of various theories: theories of numbers, functions, sets, and so on. For example,
(1) $0$ is not the successor of any natural number.
(2) distinct natural numbers have distinct successors.
(3) if $P$ holds for $0$, and holds for $n+1$ whenever it holds for $n$, then it holds for all $n$.
or
(4) there is an empty set.
(5) if $x$ and $y$ are sets, then $\{x,y\}$ is a set.
(6) if $x$ is a set, then $\{y \in x \mid Py\}$ is a set.
and so on.
The objects (or the prima facie objects: the values of variables) of mathematics are numbers, functions, sets, spaces, structures, fields, categories, and so on.

Metamathematics, then, is the study of mathematical theories. Its objects are mathematical theories. For example, we have a lot of knowledge about formalized mathematical theories:
(7) for any numbers $n, k$, we have: $Q \vdash \underline{n+k} = \underline{n} + \underline{k}$.
(8) $Q \nvdash \forall x \forall y(x + y = y + x)$.
(9) If $PA$ is consistent, then $PA + \neg Con(PA)$ is consistent.
(10) $PA$ is not finitely axiomatizable.
(11) If every finite subset of $T$ has a model, then $T$ has a model.
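For instance, to verify the instance of (7) with $n = k = 1$, one reasons inside $Q$ roughly as follows (a routine sketch): by the axiom $x + Sy = S(x+y)$, $S0 + S0 = S(S0 + 0)$; by the axiom $x + 0 = x$, $S0 + 0 = S0$; hence $S0 + S0 = SS0$, i.e., $Q \vdash \underline{1} + \underline{1} = \underline{2}$.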
Within metamathematics, the objects referred to needn't only be syntactical entities. It is routine in metamathematics to talk of numbers, functions, sets and models, as well as syntactic strings, formulas, derivations, and theories.

There is a sense also in which metamathematics counts as part of mathematics. For the statements (7)-(11) above will all be found in various mathematics textbooks (called, e.g., A Mathematical Introduction to Logic, Computability and Logic, and so on), used in mathematics courses. One uses ordinary (informal) mathematical assumptions and methods to prove the results (7)-(11). This isn't to say that one can always close the gap between the object language theory and the metalanguage theory. (Tarski's theorem suggests that in some deep sense, one cannot.)

An interesting point is that the theories one has most understanding of in metamathematics (I mean the theories that we have definite results about, such as (7)-(11)) are theories like: $Q$, $PRA$, $I \Sigma_n$, $ACA_0$, $Z_2$, $Z$, $ZF$, etc. These are generally formalized theories given in formalized languages. So, such theories are not exactly the same as the informal theories that mathematicians themselves know and use. Presumably, there is some formalization relationship whereby the assumptions of informal number theory can be formalized into---i.e., translated into---say, the language of $PA$, and then proved.

There is an important metamathematical thesis concerning informal mathematical theories. This thesis grew out of the classic foundational work of Frege, Dedekind, Cantor, Peano, Zermelo and Russell, from, say, 1879 (Begriffsschrift) to 1910-13 (Principia Mathematica). It lacks a standard name, so let me call it the Z Thesis (you can read "Z" as "Zermelo", or just as "some kind of set theory, like $Z$, $ZF$, etc."):
The Z Thesis
Virtually all (say, 99.9%) informal mathematics can be formalized and proved in $ZFC$.
Indeed, often in something a lot weaker, such as a subsystem of second-order arithmetic. But, in fact, it turns out that very simple systems of arithmetic are more or less equivalent to simple systems of set theory. In particular, $PA$ is intimately related to $Z$ set theory with the negation of the axiom of infinity.
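
For example, using the standard von Neumann coding, one identifies $0$ with $\emptyset$ and $n+1$ with $n \cup \{n\}$, so that $2 = \{\emptyset, \{\emptyset\}\}$; addition and multiplication are then definable, and, in a suitable set theory, the axioms of $PA$ become theorems about the finite von Neumann ordinals.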

The Z Thesis is not a normative claim that informal mathematics should be reduced to set theory; it is a descriptively factual claim that it can be. This is developed in fairly rigorous detail in any introductory set theory textbook and in many first-year university mathematics courses, where the notions of pair, relation, function, sequence, etc., are all defined in set-theoretic terms.
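
For instance, the standard definitions run: the ordered pair is $(a, b) := \{\{a\}, \{a, b\}\}$; a relation is a set of ordered pairs; a function is a relation $f$ such that whenever $(a, b) \in f$ and $(a, c) \in f$, then $b = c$; and a sequence is a function whose domain is a natural number or $\mathbb{N}$.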

The Z Thesis implies that:
Virtually all informal mathematics can be formalized (or "modelled", or "implemented", if you prefer) in a foundational theory whose basic concepts are:
$x \in y$ ("$x$ is an element of $y$")
$x = y$ ("$x$ is identical to $y$")
This is very puzzling. So far as I am aware, no one has any idea why this is so.

The Z Thesis connects informal mathematics to a certain underlying foundational system, which may be formalized quite precisely (i.e., $ZFC$). It has a certain empirical, or, to be more precise, historical aspect to it. For informal mathematics is what mathematicians have done for centuries, and it is curious that what they have come up with has this property (reducibility to a theory of membership and identity).

There are several other kinds of metamathematical claim---claims about mathematical theories---that relate to other disciplines.

First, cognitive science (broadly construed, to include psychology, neuroscience, linguistics and computer science). How is mathematical language cognized? How are mathematical theories recognized, conjectured, posited or learnt?

Second, epistemology & metaphysics. How are mathematical theories justified? What is the modal status of mathematical theories? What is the structure of mathematicized scientific theories (i.e., the standard theories of science)?

Thursday, 17 January 2013

Chair of Philosophy of Mind (Munich)

The Faculty of Philosophy, Philosophy of Science and Religious Studies at LMU Munich invites applications for a
 
Full Professorship (Chair) of Philosophy of Mind
 
commencing as soon as possible. (The position is of type W3 in the German system.)
 
The holder of this position will also be a member of the interfacultative Munich Center for Neurosciences – Brain and Mind (MCN). Applicants should have a research focus in the field of Philosophy of Mind. Apart from representing the field of Philosophy of Mind, the appointed person is expected to (i) play a substantial role in shaping the philosophical component of the interdisciplinary cluster Brain and Mind at the MCN; and (ii) engage actively in teaching and research to an adequate degree both within the Faculty of Philosophy and the Graduate School of Systemic Neurosciences (GSN-LMU) directed by the MCN.
 
Prerequisites for this position are a doctoral degree, teaching skills at university level, excellent academic achievements, and a productive and promising research program.
 
LMU Munich provides newly appointed professors with various types of support, such as welcoming services and assistance for dual career couples.
 
LMU Munich is an equal opportunity employer. The University continues to be very successful in increasing the number of female faculty members and strongly encourages applications from  female candidates. LMU Munich intends to enhance the diversity of its faculty members. Furthermore, disabled candidates with essentially equal qualifications will be given preference.
 
Please submit your application with the usual documents (CV, certificates, list of publications; publications upon request only) to Ludwig-Maximilians-University Munich, Faculty of Philosophy, Philosophy of Science and Religious Studies, Dekanat, Geschwister-Scholl-Platz 1, 80539 Munich, Germany, no later than February 18th, 2013.

Tuesday, 15 January 2013

The nonsense math effect

Good to know ;-)

Kimmo Eriksson
The nonsense math effect
Judgment and Decision Making, 7 (2012), pp. 746-749.

Abstract. Mathematics is a fundamental tool of research. Although potentially applicable in every discipline, the amount of training in mathematics that students typically receive varies greatly between different disciplines. In those disciplines where most researchers do not master mathematics, the use of mathematics may be held in too much awe. To demonstrate this I conducted an online experiment with 200 participants, all of which had experience of reading research reports and a postgraduate degree (in any subject). Participants were presented with the abstracts from two published papers (one in evolutionary anthropology and one in sociology). Based on these abstracts, participants were asked to judge the quality of the research. Either one or the other of the two abstracts was manipulated through the inclusion of an extra sentence taken from a completely unrelated paper and presenting an equation that made no sense in the context. The abstract that included the meaningless mathematics tended to be judged of higher quality. However, this "nonsense math effect" was not found among participants with degrees in mathematics, science, technology or medicine.

Monday, 14 January 2013

Kinds of Discourse about Fictional Entities

A theory of fictional entities aims to make overall semantic and ontological sense of discourse about fictional entities. It is quite difficult to do this. A recent survey article, "Fictional Entities", by Amie L. Thomasson (in A Companion to Metaphysics, eds., Kim, Sosa and Rosenkrantz, Blackwell, 2009) classifies four kinds of phenomena that any such theory should be able to account for:
(1)  Fictionalizing discourse (discourse within works of fiction), e.g. “[Holmes was] the most perfect reasoning and observing machine that the world has seen” in “A Scandal in Bohemia”. 
(2)  Nonexistence claims, e.g. “Sherlock Holmes does not exist”. 
(3)  Internal discourse by readers about the content of works of fiction. This may be either intra-fictional (reporting the content of a single work of fiction, e.g. “Holmes solved his first mystery in his college years,”) or cross-fictional (comparing the contents of two works of fiction, e.g. “Anna Karenina is smarter than Emma Bovary”). 
(4)  External discourse by readers and critics about the characters as fictional characters, e.g. “Holmes is a fictional character”, “Hamlet was created by Shakespeare”, “The Holmes character was modeled on an actual medical doctor Doyle knew”, “Holmes appears in dozens of stories”, “Holmes is very famous”.
Thomasson continues with a summary of the basic problem:
The puzzles for fictional discourse arise because many of the things we want to say about fictional characters seem in conflict with each other: How, for example, could Holmes solve a mystery if he doesn’t exist? How could Hamlet be born to Gertrude if he was created by Shakespeare? Any theory of fiction is obliged to say something about how we can understand these four kinds of claim in ways that resolve their apparent inconsistencies. And any theory of fictional discourse will have import for whether or not we should accept that there are fictional entities we sometimes refer to, and if so, what sorts of thing they are and what is literally true of them. 
UPDATE (15 Jan): Tim Button mentions in the comments below that an important fifth kind of discourse may have been omitted. Possibly Thomasson intended it to be covered by type (4), "external discourse", so I'll call it:
(4)* Mixed external discourse by readers expressing relations between the characters and non-fictional entities, e.g. “Jeff Ketland is smarter than Homer Simpson”, “My college is prettier than Hogwarts”.

Friday, 11 January 2013

Indirect proofs in the Prior Analytics

(Cross-posted at NewAPPS)

A few days ago I wrote a post on a dialogical conceptualization of indirect proofs. Not coincidentally, much of my thinking on this topic at the moment is prompted by the Prior Analytics, as we are currently holding a reading group of the text in Groningen. We are still making our way through the text, but here are some potentially interesting preliminary findings.
I am deeply convinced that the emergence of the technique of indirect proofs marks the very birth of the deductive method, as it is a significant departure from more ‘mundane’ forms of argumentation (as I argued before). So it is perhaps not surprising that the first fully-fledged logical text in history, the Prior Analytics, offers a sophisticated account of indirect proofs.

The first chapters of the Prior Analytics focus on showing which combinations of pairs of premises of the four categorical propositional forms (a: Every A is B; i: Some A is B; e: No A is B; o: Some A is not B) produce conclusions that follow ‘of necessity’. Aristotle first argues (chap. 4) that the so-called first-figure syllogisms are perfect (or complete, in Smith’s translation) because their validity is immediately apparent to us: it follows from the meaning of ‘Every’ and ‘No’ (Dici de omni/de nullo). What determines the figure of a syllogism is the position of the middle term with respect to the two other terms. (I am adopting the ‘A is B’ formulation, but Aristotle famously also uses the ‘B belongs to A’ schema; I use M for the middle term, S for the subject of the conclusion, and P for the predicate of the conclusion.)

First                Second                  Third
M/P                P/M                       M/P
S/M                S/M                       M/S
------              -------                     -------
S/P                 S/P                         S/P

What he does next is to show that valid syllogisms in the second and third figures can be shown to be valid by means of a process of ‘perfection’ (or ‘completion’), which consists in applying a few rules of inference (the perfect syllogisms themselves, conversion and subalternation) to pairs of premises so as to obtain a conclusion (see a paper of mine with Edgar Andrade for further details). So for example, the pair ‘No P is M, Every S is M’ can be shown to produce the conclusion ‘No S is P’ by an application of conversion to the first premise, which results in ‘No M is P’, and then we have Celarent, which is one of the first-figure perfect syllogisms.

Conversion consists in switching positions for subject and predicate. Naturally, since it is a matter of changing the relative disposition of the terms in the premises (so as to obtain the disposition that characterizes the first-figure syllogisms), conversion is the key device. But only the e and the i propositions convert simpliciter: from ‘No A is B’ we can infer ‘No B is A’ (and vice-versa), and the same for ‘Some A is B’. The a and o propositions do not convert (the a propositions are said to convert accidentally: ‘Every A is B’ converts to ‘Some B is A’). So when we have a pair of a and/or o premises, the proof-theoretical framework of syllogistic does not offer any devices to ‘perfect’ the pair in question, even though some such combinations do produce conclusions, such as Baroco (second figure: ‘Every P is M, some S is not M, thus some S is not P’) and Bocardo (third figure: ‘Some M is not P, every M is S, thus some S is not P’).

This is where indirect proofs come in. To perfect such syllogisms, what Aristotle calls the ‘ostensive’ (direct) approach (in the Striker translation; ‘probative’ in the Smith translation) will not do, simply because the proof-theoretical power of syllogistic is quite limited.

Aristotle contrasts the idea of an ostensive argument with that of an argument from an assumption/hypothesis (chapter A 23). Arguments leading to the impossible, which correspond to our notion of an indirect proof, are for him a kind of argument from an assumption/hypothesis. To illustrate an argument leading to the impossible, Aristotle actually offers a mathematical example, namely the proof of the incommensurability of the diagonal (41a26-28). This is important, as it suggests more than casual contact between mathematicians and philosophers at the time when the deductive method was taking shape almost simultaneously in both disciplines.

Elsewhere in the text, he uses the same approach to perfect the syllogisms that cannot be perfected ‘ostensively’ (directly), i.e. those containing premises and conclusions that cannot be converted. His usual procedure is the following: if you want to show that premises A and B produce conclusion C, you take A and the contradictory of the conclusion, not-C, and show that you can deduce not-B from A and not-C. The general idea can be represented as follows (I owe this schema to Leon Geerdink):
[not-C]     A
    :
    :
  not-B               B
  ---------------------
            ⊥
      -------------
            C
You can thus deduce C from A and B, but with the auxiliary hypothesis/assumption of the contradictory of C. The subproof from [not-C] and A to not-B consists of direct applications of the usual rules of inference of the system (conversion and/or one of the perfect syllogisms), and the fact that [not-C] is treated as a hypothesis is at this point an extra-logical, quasi-pragmatic property of the proof. (He uses the phrase ‘reached through an agreement’ to refer to the status of the hypothesis, which clearly has a dialectical flavor.)
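
To make this concrete, here is a worked instance for Baroco (my own reconstruction of the steps): the premises are A = ‘Every P is M’ and B = ‘Some S is not M’, and the conclusion to be proved is C = ‘Some S is not P’. Take A together with the contradictory of the conclusion, not-C = ‘Every S is P’. By Barbara (a first-figure perfect syllogism), ‘Every S is P’ and ‘Every P is M’ yield ‘Every S is M’, which is not-B, the contradictory of premise B. Since B was granted, something impossible has resulted; the hypothesis not-C is thereby destroyed, and C follows from A and B.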

There is a beautiful formulation of the different stages of the proof in 41a23-26 (Striker translation):
All those who reach a conclusion through the impossible deduce the falsehood by a syllogism, but prove the initial thesis from a hypothesis, when something impossible results from the assumption of the contradictory.
He thus distinguishes the act of deducing (which corresponds above to going from not-C and A to not-B, which is the falsehood) from the act of proving, which refers to the whole argument leading to the main conclusion C, the ‘initial thesis’. The act of deducing here is an ostensive argument and corresponds to a subproof, whereas the act of proving (showing) corresponds to the whole demonstration. For the act of deducing, the status of the premises (taken as hypotheses or asserted) is irrelevant, but for the act of proving it makes all the difference that not-C is merely taken as a hypothesis at the beginning. This is an important distinction to keep in mind, and one of which Aristotle was already keenly aware.

Tuesday, 8 January 2013

Sets, Categories and Types

This is a pointer to a very nice post by Mike Shulman, at the n-Category Cafe, on set theory, category theory and type theory: "From Set Theory to Type Theory".

Many interesting and thought-provoking ideas there, clarifying certain conceptual differences between how set theory thinks of sets ("material set theory") and how category theory does, in a more structural way ("structural set theory"), and how type theory (perhaps) brings this together.

A dialogical conception of indirect proofs

In his commentary on Euclid, the 5th century Greek philosopher Proclus defines indirect proofs, or ‘reductions to impossibility’, in the following way (I owe this passage to W. Hodges, from this paper):
Every reduction to impossibility takes the contradictory of what it intends to prove and from this as a hypothesis proceeds until it encounters something admitted to be absurd and, by thus destroying its hypothesis, confirms the proposition it set out to establish. 
Schematically, a proof by reduction is often represented as follows:

[~A]
.
.
.
------
A

It is well known that indirect proofs pose interesting philosophical issues. What does it mean to assert something with the precise goal of then showing it to be false, i.e. because it leads to absurd conclusions? Why assert it in the first place? What kind of speech act is that? It has been pointed out that the initial statement is not an assertion, but rather an assumption, a supposition. But while we may, and in fact do, suppose things that we know are not true in everyday life (say, in the kind of counterfactual reasoning involved in planning), to suppose something precisely with the goal of demonstrating its falsity is a somewhat awkward move, both cognitively and pragmatically.

It seems to me (but this is ultimately an empirical hypothesis to be further investigated) that there are only two situations where this argumentative strategy is regularly used: mathematical and legal contexts (keep legal contexts in mind; they will come back). In other words, my claim is that when people are arguing at the pub or such like, they do not use reductio arguments. (This is something that psychologist David Over and I have a bet on: he thinks people do, I think they don’t. Maybe one day we’ll run a research project together to investigate ‘reductio in the wild’, so to speak.) UPDATE: In comments at NewAPPS, Branden Fitelson mentions this excellent paper on why it is so difficult (it is!) for students to understand the concept of an indirect proof. 

Even in the relevant circles of specialists, quite a few people have issues with indirect proofs, most famously intuitionists who reject double-negation elimination – the crucial step which goes from the rejection of ~A to the assertion of A. It is also often said that Frege’s account of inference as going from true statements to true statements leaves no room for indirect proofs (but here is a recent paper by Ivan Welty countering this claim). So even within mathematics and logic, indirect proofs are somewhat controversial.

If we accept that indirect proofs are a bit of an oddity even within mathematics, it makes sense to ask how on earth this argumentative strategy might have emerged and established itself as one of the most common ways to prove mathematical theorems. Now, as some readers may recall, my current research project focuses on ‘the roots of deduction’, adopting the hypothesis that we need to go back to deduction’s dialogical origins to make sense of the whole thing (as discussed here, for example). And here again, it seems that the dialogical, multi-agent perspective offers fresh insight into the nature of indirect proofs.

Assume a dialectical context in which two participants are disputing on a certain topic, and let us call them 1 and 2 to keep it neutral. Then imagine that 1 wants to convince 2 of proposition A; how can she go about it? Well, she can propose ~A and see if 2 takes the bait. It is important that ~A be put forward in the form of a question (which is indeed how such disputations often began in ancient Greece, as attested for example by Aristotle’s Topics), so that by accepting ~A, 2 commits to its truth but not 1; 1 has merely put it forward as a question and thus has herself not endorsed ~A. 1 can now proceed to show that something absurd follows from the acceptance of ~A, because this is not her position; it is 2’s position. By showing that something absurd follows from ~A, 1 in fact shows that it was a bad idea for 2 to accept ~A in the first place. There is still the contentious last step which goes from ‘accepting ~A is a bad idea’ to ‘accepting A is a good idea’. But 1 has not done anything pragmatically incoherent, because she herself never committed to ~A.

In legal contexts, reductio arguments are used in much the same way. The prosecution may claim A (the defendant was at the crime scene), and the defense may then show that, given additional background information, A leads to absurdity (say, to the possibility of traveling between Paris and London in less than 30 min). (Welty’s paper has a similar legal example.) So what you show as entailing absurdity in a reductio argument is in fact the position of your opponent, not your own position (not even your own assumption). The adversarial, multi-agent component is crucial to understand what it means to prove something indirectly; it makes the postulation of the strange speech-act of supposing precisely that which you want to prove to be false superfluous. In a purely mono-agent context, in contrast, she who formulates an indirect proof has to play awkwardly conflicting roles simultaneously. (Naturally, it is perfectly possible to formulate an indirect proof on your own, but this is a consequence of what I describe as the ‘internalization of opponent’ by the method itself.)

I think that this multi-agent, dialogical account of indirect proofs is conceptually appealing on its own, but within the Roots of Deduction project, we (Matthew Duncombe, Leon Geerdink and myself) are also investigating the historical plausibility of the hypothesis. For now, it is interesting to notice that, in the Prior Analytics, Aristotle makes extensive use of indirect proofs, as is well known, but also that he often uses dialectical vocabulary to explain the concept of an indirect proof. (In fact, he uses dialectical vocabulary throughout the text.) UPDATE: here is a subsequent post I wrote on indirect proof in the Prior Analytics.

(A cool coincidence is that just yesterday Mic Detlefsen invited me to present at his PhilMath Intersem colloquium in Paris in June, precisely on the topic of the history of indirect proofs. So there will be much work to be done on the topic for me, but for now this is my starting point.)

Two Conceptions of Metasemantics: Davidsonian and Lewisian

In this post I want to try and draw a distinction between two rather different ways of conceiving the methodology of semantic theory/theory of meaning. One conception I shall call Davidsonian and the other I shall call Lewisian.

1. Davidsonian Metasemantics: Quantification over Meaning Theories

On a broadly Davidsonian metasemantics, one aims to give a theory of meaning (cast as a compositional truth theory) for the idiolect of some particular speaker, let's say Kurt.
The central point I want to emphasize is that what one actually gives is a theory: a set of axioms. (It is not so important what the details of this theory are, or even whether it is a truth theory as opposed to an assignment of intensional meanings---the sort of thing about which Quinians and Davidsonians tend to be sceptical).
So long as everything goes ok, the axioms of this meaning/truth theory then logically imply Tarski-style T-sentences such as:
(i) "Es regnet" is true when uttered by Kurt at time t if and only if it is raining near Kurt at time t. 
These T-sentences of the truth theory are then tested by comparing them with more observational sentences, which express the conditions under which the speaker holds true the various sentences:
(ii) Kurt holds true "Es regnet" under circumstances that it is raining nearby.
Such statements record what I've called U-facts, for "usage facts". In this context, we can call the statements themselves HT-sentences. The two kinds of statement here---the T-sentences of the truth theory and the HT-sentences recording the U-facts---are to be logically connected by a Principle of Charity: maximise the degree to which the sentences held true by Kurt are true (on the theory being tested).

On a Davidsonian conception of metasemantics, the two main features I want to focus on are:
(D1) The semantic theorist is quantifying over meaning (truth) theories proposed for a particular speaker.
(D2) The U-facts select the "right" theory via a Principle of Charity.
2. Lewisian Metasemantics: Quantification over Interpreted Languages

The Lewisian approach is conceptually quite different. On the one hand, one may describe all sorts of interpreted languages which may or may not be spoken by a speaker. These languages are, to all intents and purposes, abstract entities. The status of semantic theory is then quite different, for the description of an interpreted language consists in definitions and stipulations: a language $L$ may simply be defined to be such that: the alphabet of $L$ is ...., the $L$-strings are ..., the referent of string $\sigma_1$ in $L$ is ..., etc.
For example, one might define a language $L$ such that:
(iii) the proposition that $L$ assigns to "Es regnet" in context C = the proposition that it is raining (in C).
Now the problem of relating the language to the speaker is a problem of identifying which language the speaker speaks (or "cognizes", as I prefer to say). So, the U-facts are thought of as pinning down claims of the following kind,
(iv) Kurt cognizes $L$. 
How one does this is a rather complicated matter that I don't want to get into here. But, roughly, Kurt cognizes $L$ just when the meanings that Kurt assigns to $L$-strings are the meanings that $L$ assigns to those strings. (In fact, Lewis himself (Lewis 1975) gave a quite different analysis, in terms of social conventions, and disavowed the brief explanation just given.)

So, on a Lewisian conception of metasemantics, the two main features I want to focus on, corresponding to the Davidson case, are:
(L1) The semantic theorist is quantifying over interpreted languages.
(L2) The U-facts select which language the agent speaks/cognizes.
3. Comparison

In discussing this topic on several occasions in talks over the last few years (I've given four or so talks on this material since 2008) and with colleagues, I've mentioned that debates formulated in the Davidsonian approach can often be reformulated within the Lewisian one, and vice versa. But, even so, I think there are definite theoretical advantages to the Lewisian conception. I don't want to go into them here, as they're a bit convoluted, so will write about them at a later point.

Sunday, 6 January 2013

The Abstract View: Two Analogies

What I've called the Abstract View of languages is the view set out by Lewis (1970, "General Semantics" and 1975, "Languages and Language") and Soames (1984, "What is a Theory of Truth?"). The view may be partially explicit, or implicit, in the writings of others (e.g., Tarski, Carnap, Montague). Languages are systems of syntax, along with (though not necessarily along with) semantics and pragmatics. Syntax is understood very liberally (anything can count as a symbol or a sign), and the assignments of meanings and pragmatic contents are arbitrary.

So, this is consistent with a kind of Semantic Conventionalism ("anything can mean anything"), which has the further consequence that meaning relations needn't be reduced to physicalistic/naturalistic relations. They are simply stipulated or defined mathematical functions, assigning meanings to strings. And, crucially, the syntactic, semantic and pragmatic properties of a language $\mathbf{L}$ are not dependent in any way on whether there are even any agents/minds that speak, or "cognize", the language. For example, there is no further ground-level naturalistic or intentional fact in virtue of which the string "Schnee" means snow in German. It is a property (an essential property) of German that the referent in German of "Schnee" is snow. Because there is nothing physical/naturalistic "connecting" strings and their meanings, except the particular meaning function intrinsic to the language, we expect list-like definitions of semantic notions, such as,
$x$ refers to $y$ in German if and only if either ($x$ = "Schnee" and $y$ = snow) or ($x$ = "Wasser" and $y$ = water) or ...
I think this resolves Hartry Field's request for a further reduction of "primitive denotation" (in his classic 1972 paper, "Tarski's Theory of Truth").

This concept of languages permits a certain division of labour in linguistic theory (and in philosophy of logic and language): syntax/semantics has been separated from the problem of language cognition. The syntax theorist is now free to dream up any system of syntax she likes; the semantic theorist is now free to dream up any system of syntax and semantic features she likes. These areas have been moved more-or-less entirely into applied mathematics. On the other hand, what is involved in cognizing, or speaking, or implementing, or realising, a language in some physical system (like a brain or computer) is now conceptually separate. Which patterns of neural activation occur during language acquisition or during particular speech acts, how linguistic stimulus inputs are processed, how token sounds and inscriptions are physically produced, etc, are problems of cognitive science, and not syntax or semantics per se.

I think there are two useful analogies for this view.

1. Computer Programs

A computer program is a sequence of instructions for performing a computation. But the computer program is itself not a concrete entity. In some sense, the physical system "implements" or "realizes" the program. As in the language case, one can study the properties of a computer program $P$ independently of its implementation. For example, one might show (mathematically) that the program $P$ will never halt on a certain input $n$. Or one might show that for any given input of size $n$, there is an upper bound $f(n)$ on how long, or how many steps, the program takes to compute an output.
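
As a toy illustration of my own (the function and the property proved of it are placeholders, not anything drawn from Lewis or Soames): consider the following little Python program. Quite independently of any physical machine that might run it, one can prove that, on an input list of length $n$, the loop body executes at most $n$ times, an upper bound of the kind just mentioned (here $f(n) = n$).

def search(xs, target):
    # Linear search: return the first index of target in xs, or -1 if absent.
    # Provable property, independent of any implementation: the loop body
    # runs at most len(xs) times, so the number of comparisons is bounded
    # by the size of the input.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1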

So, on this analogy, languages are like computer programs. One can investigate their properties independently of their "implementation". And, if one has determined that a program has certain properties, this will allow one to make inferences & predictions about how a physical system behaves if it "implements" that program. Analogously, one can study languages independently of whether they are "cognized"; and, if one has determined that a language has certain properties, this will allow one to make inferences & predictions about how an agent behaves if they "cognize" that language.

2. Abstract Structures in Applied Mathematics

Probably the earliest abstract structures that human minds found out about were the system $(\mathbb{N}, +, \times, \le)$ of natural numbers, and the system $(\mathbb{R}, +, \times, \le)$ of real numbers. Somehow, our ancestors got the basic ideas, and this developed until significant clarity was achieved in the 19th century. And similarly with the abstract structure of Euclidean space, $\mathbb{E}^3$: it was implicitly believed that physical space must have this structure, until it was realized that physical space needs to be distinguished from various mathematical spaces.

There is some reasonable sense in which, although our knowledge of these three abstract structures arose from our sensory experience, the study of the abstract structures could then be detached from questions about whether physical things "instantiate" these structures. So, the pure mathematician is left alone to study these structures, and countless others, including all sorts of generalizations (metric spaces, topological spaces, manifolds, rings, fields, groups, etc). And the applied mathematician/theoretical physicist focuses on those which have found instantiations (or approximate instantiations).

How exactly a mathematical structure is "instantiated" physically is an interesting and quite difficult philosophical problem, connected to debates about the applicability of mathematics and various indispensability arguments.

On the abstract view, languages are thought of in pretty much the same way as the theoretical physicist or applied mathematician might think of the abstract structures (manifolds, Lie groups, etc.) invoked in physics.

The Guitar Language

Nobody can be forbidden to use any arbitrarily producible event or object as a sign for something. (Frege 1892, "On Sense and Reference")
I have an extremely liberal notion of what a language is---Lewis's abstract view. A language is any bunch of syntax, possibly along with meaning functions. The syntax can be pretty much anything; the symbols can be pretty much anything; the meaning functions can be pretty much anything; and the meaning values can be pretty much anything.

I want to define a language $\mathbf{L}_G$, which I'll also call the Guitar Language.

First let me stipulate (just for this context here) that "$e$" be a name of my Epiphone guitar, "$r$" be a name for my Rickenbacker guitar and "$y$" be a name for my Yamaha guitar. So, $e$, $r$ and $y$ are my guitars. (You do understand this! I have just temporarily augmented your idiolect by a local baptism.)

Second let me define three propositions:
$p_1 =$ the proposition that Paul McCartney is a lizard.
$p_2 =$ the proposition that Yoko Ono was born in Wrexham.
$p_3 =$ the proposition that Ringo Starr plays drums.
Finally I define the Guitar Language $\mathbf{L}_G$:
(1) the alphabet of $\mathbf{L}_G$ is $\{e, r, y\}$. The only strings in $\mathbf{L}_G$ are these symbols.
(2) the meanings of these strings, relative to $\mathbf{L}_G$, are given by the following meaning function $\mu_{\mathbf{L}_G}$:
$\mu_{\mathbf{L}_G}(e) = p_1$.
$\mu_{\mathbf{L}_G}(r) = p_2$.
$\mu_{\mathbf{L}_G}(y) = p_3$.
(3) For any guitar $g \in \{e, r, y\}$, an action is a speech act of asserting $\mu_{\mathbf{L}_G}(g)$ in $\mathbf{L}_G$ iff it consists in tapping the machine head of the G-string of $g$ twice, with one's left thumb.
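Just to display the structure, here is a minimal sketch of $\mathbf{L}_G$ in Python (my own illustration; the propositions $p_1$, $p_2$, $p_3$ are crudely represented by English strings, which is of course only a surrogate):

# The Guitar Language L_G as a bare alphabet plus a meaning function.
guitar_language = {
    "alphabet": {"e", "r", "y"},
    "meaning": {
        "e": "Paul McCartney is a lizard",      # p1
        "r": "Yoko Ono was born in Wrexham",    # p2
        "y": "Ringo Starr plays drums",         # p3
    },
}

def meaning_of(symbol):
    # The meaning function mu_LG, given by a finite lookup table.
    return guitar_language["meaning"][symbol]

Nothing about a speaker, or a brain, appears anywhere in this definition; that is the point.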
This language $\mathbf{L}_G$ is a very simple "signalling language". It has only three symbols and no significant (i.e., combinatorial) syntax. The contents of the signals are (eternal) propositions (usually, of course, the content of a signal is indexical in some way, such as "The house is on fire right now!"). Now what would it mean for an agent (or mind) to speak/cognize the language $\mathbf{L}_G$? What would it mean for the mind of an agent to assign these propositions to these symbols?

It seems that the right thing to say is that an agent cognizes $\mathbf{L}_G$ by being disposed to perform the relevant speech act of $\mathbf{L}_G$ when in the right mental state; i.e., when a symbol is "asserted", then the propositional content of the agent's mental state is identical to the propositional content of the symbol.

Unfortunately, speaking $\mathbf{L}_G$ requires being able to perform these speech acts, which means being able to tap the guitars, so even though you might speak/cognize $\mathbf{L}_G$, you might never get the chance to actually make an assertion in $\mathbf{L}_G$.

But just for good measure, here is a photo of the symbols, $e$, $r$ and $y$:


Saturday, 5 January 2013

Are there causally active mixed mathematical objects?

At the heart of the indispensability arguments against nominalism lie mixed mathematical objects - henceforth MMOs. A simple example of an MMO is the set of US presidents. Or the set of eggs in a fridge. The object counts as mathematical because it is a set, and it counts as "mixed" because its elements are concrete entities. Another example would be any function $f$ from the set of US presidents to $\{0,1\}$. This function $f$ would map a president $p$ to a number $f(p) \in \{0,1\}$. Another example would be a quantity: a quantity maps concrete things to abstract values. (This is a puzzling case though. Must there be a concretum for every value? Must there be a concretum for each value of mass-in-kilograms? Possibly one should say that a quantity is really some kind of structure built of properties.) A final example would be a physical field, such as the electric and magnetic fields, $\bf{E}$ and $\bf{B}$, usually unified into the electromagnetic field (written $F_{ab}$ in tensor notation). The fields $\bf{E}$ and $\bf{B}$ are vector fields on spacetime: they are (mixed) functions which assign abstract values to points in spacetime.

The question I am interested in is whether any of these MMOs ever counts as being causally active. For it is usually (and presumably rightly) assumed that pure mathematical entities---the set of natural numbers, the sine function, $\pi$, $\aleph_{57}$, etc.---are not causally active. But it seems to me that, according to physics itself, the electromagnetic field (which is, remember, an MMO, a mixed mathematical entity) is causally active.

For example, the Lorentz force law says, for a point particle of mass $m$ and charge $q$ and position vector $\mathbf{r}(t)$,
$m \frac{d^2 \mathbf{r}}{dt^2} = q(\mathbf{E} + \frac{d \mathbf{r}}{dt} \times \mathbf{B})$
So, the motion of the particle (its acceleration) is determined by the fields, $\mathbf{E}$ and $\mathbf{B}$, which are MMOs. Consequently, it seems that there are causally active mixed mathematical objects --- namely, physical fields.
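
To see the fields playing this role as functions, here is a minimal numerical sketch in Python (my own illustration; the field values and particle parameters are placeholders, and the fields are taken to be constant):

import numpy as np

def E(x, t):
    # The electric field as a mixed function: spacetime point -> vector value.
    return np.array([0.0, 0.0, 1.0e3])   # placeholder value, in V/m

def B(x, t):
    # The magnetic field as a mixed function: spacetime point -> vector value.
    return np.array([0.0, 0.5, 0.0])     # placeholder value, in T

def lorentz_acceleration(q, m, x, v, t):
    # Acceleration of a point charge from the Lorentz force law:
    # m d^2r/dt^2 = q (E + v x B).
    return (q / m) * (E(x, t) + np.cross(v, B(x, t)))

a = lorentz_acceleration(q=1.6e-19, m=9.1e-31,
                         x=np.array([0.0, 0.0, 0.0]),
                         v=np.array([1.0e5, 0.0, 0.0]),
                         t=0.0)

The acceleration returned is fixed by the values that the field functions take at the particle's location, which is just the point made above.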