The Modal Status of Semantic Facts, II

An interesting draft article,
"Semantics and Metasemantics in the Context of Generative Grammar",
by the philosopher of language Seth Yalcin, caught my attention.

The metasemantic view expressed there seems to be the orthodox one, and it again points to the fundamental schism in metasemantics that I've mentioned from time to time, and which is discussed in the dialogue
"There's Glory for You!".
That is: what is the modal status of semantic (syntactic, etc.) facts?

On what I've dubbed the Abstract View (which I associate with Lewis, Soames and possibly Kripke; and perhaps before that with Tarski, Carnap and Montague), the metasemantic picture is this. Semantics studies languages. Languages need not be "spoken" or "cognized". For they may have strange interpretations. They might be infinitary. Their syntax might be non-recursive. Etc. Thought experiments about the individuation conditions of languages strongly suggest that even the tiniest change in a language, across speakers, times and worlds, makes for a language difference. Consequently, if my idiolect $L$ is such that the string "cat" means CAT in $L$, then it is an essential (de re) property of $L$ that it does so. So, semantic facts are necessities.
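One rough way to make the individuation idea explicit (writing $\mu_L$ for the meaning function of $L$, as below) is:
$L_1 = L_2$ only if $\mu_{L_1} = \mu_{L_2}$, i.e., only if the two meaning functions have the same domain of strings and assign the same meaning to each string.
Contraposing: if "cat" means CAT in $L$ but fails to mean CAT in $L^{\prime}$, then $L \neq L^{\prime}$. There is no world at which $L$ itself assigns "cat" anything other than CAT; hence the necessity.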

The opposite view---which seems to be the orthodoxy in writing on metasemantics---says that T-sentences, like
(T) "Schnorble este fnoffle" is true in L if and only if snow is white
are contingent. But a number of considerations suggest that T-sentences express necessities. For example, if $L^{\ast}$ is a language such that "Schnorble este fnoffle" is not true in $L^{\ast}$ if and only if snow is white, then it seems that $L \neq L^{\ast}$.

("fnoffle" is a nonsense string that Torkel Franzen would use when joking.)

In my view, the advocates of the orthodox view are conflating the semantic properties of a language $L$ (which are necessary) with cognitive relations between the language $L$ (and its strings) and any agent that cognizes $L$ (which are contingent). Or perhaps they are including use-notions such as
(i) agent A uses string $\sigma$ to mean $m$ (or to refer to $x$)
(ii) agent A assigns the meaning $m$ to the string $\sigma$
on the semantic side, whereas I include such facts on the cognitive side, pertaining to how strings are used. In any case, all would agree that claims concerning usage or meaning-assignment, like (i) or (ii), would usually be contingent.
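One crude way to display the factoring I have in mind (the notation is just bookkeeping) is:
(Necessary) $\mu_L(\sigma) = m$: a fact about the abstract object $L$ alone.
(Contingent) $A$ cognizes $L$; $A$ uses $\sigma$ to mean $m$.
So a use-claim like (i) amounts, roughly, to: there is a language $L$ such that $A$ cognizes $L$ and $\mu_L(\sigma) = m$, where the second conjunct is necessary and the first contingent.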

Having briefly introduced the abstract view in Section 4, Yalcin says:
Metasemantics, on this interpretation, would ask something like:
(6) In virtue of what is having the semantic value $m$ a necessary property of $e$?
But, it seems to me that, on the abstract view, this question can be compared with
In virtue of what is having the sine value $0.5$ a necessary property of $30^{\circ}$?
which seems weird. If, e.g.,
$f : A \to B$
is a function such that
$f(a) = b$,
then that's that. Similarly, if $L$ is a language such that $\mu_L(e) = m$, then that's that (where $\mu_L(.)$ is the meaning function for $L$). Rather, on the abstract view, metasemantics might ask questions like:
Given $L$ such that $\mu_L(e) = m$, what does cognizing $L$ involve?
Given $L$ such that $\mu_L(e) = m$, how do contingent facts about an agent A's usage of $e$ constrain that A cognizes $L$, rather than $L^{\ast}$, say?
Once we have clarified that languages are more like functions, mappings, structures, fibre bundles, etc., than they are like concreta, questions about why a function has a certain value just seem bizarre. The question
In virtue of what is a certain matrix the identity element of a matrix group?
is weird. A possible rejoinder appeals to measurement theory. Consider the (injective) mixed function (that is, a function whose domain contains concreta and whose values are mathematical objects),
$f : P \to \{1, \dots, 44\}$
where $P$ is the set of US Presidents, past and present, up to now, and $\{1, \dots, 44\}$ is the interval $\{n \in \mathbb{N} \mid 1 \leq n \leq 44\}$, and such that $f^{-1}$ enumerates the Presidents in the order of their temporal succession. Then, the question,
In virtue of what is it the case that $f(\mbox{Clinton}) = 42$?
is by no means weird, because we have set this up so that there is a Representation Theorem, of the form,
for all $x,y \in P$: $y$ immediately succeeds $x$ as POTUS iff $f(y) = f(x) + 1$.
So, we can answer the above question with,
$f(\mbox{Clinton}) = 42$ in virtue of the fact that Clinton was the president immediately after GHWB and $f(\mbox{GHWB}) = 41$ and $\dots$.
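Here is a small sketch in Python of this measurement-theoretic set-up; the enumeration is truncated to the last few presidents purely for brevity, and the final check is just the representation condition stated above:

    # Toy check of the representation condition for the POTUS enumeration.
    # Only the tail of the succession is listed, purely for brevity.
    presidents_in_order = ["Reagan", "GHWB", "Clinton", "GWB", "Obama"]  # numbers 40..44

    # f assigns each listed president his ordinal number in the succession.
    f = {name: 40 + i for i, name in enumerate(presidents_in_order)}

    def immediately_succeeds(y, x):
        """True iff y came immediately after x, read off the listed ordering."""
        return presidents_in_order.index(y) == presidents_in_order.index(x) + 1

    # Representation condition: for all x, y: y immediately succeeds x iff f(y) = f(x) + 1.
    assert all(
        immediately_succeeds(y, x) == (f[y] == f[x] + 1)
        for x in presidents_in_order
        for y in presidents_in_order
    )

    print(f["Clinton"])  # 42: grounded in the succession facts plus the enumeration convention

So the "in virtue of" answer is read off the succession facts together with the convention fixing where the enumeration starts.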
But in the case of language cognition, it is not remotely clear that there is a corresponding Representation Theorem. (There might be. I don't know. I've thought about trying to make sense of a Representation Theorem for language cognition, but I can't see how to do it.)

Yalcin continues:
I take it as obvious that this yields the wrong conception of descriptive semantics, and of metasemantics. Semantic theory is not helpfully understood as an inquiry into the necessary truths about expressions. The facts uncovered in descriptive semantic inquiry are largely empirical and contingent. Metasemantics is interested in the ground of those facts.
and adds, in a footnote (12),
... but this idea is stupefying.
Is it really so strange?

On the abstract view, semantic theory asks many interesting questions about the semantic properties of a language $L$. For example,
  • what are the computational properties of language $L$? 
  • what are the properties of $L$'s consequence relation? 
  • what is the length of the shortest string expressing content $m$?
  • which objects (or sets, relations) are definable in $L$?
  • etc.
So, semantic theory is "helpfully understood as an inquiry into the necessary truths about ..." (namely, languages), much as arithmetic may be understood as an inquiry into necessary truths about numbers; a toy sketch follows below to illustrate the point. For this to seem more palatable, one need only separate:
  • mathematical problems concerning the semantic (syntactic, phonological, pragmatic) properties of languages; from ...
  • empirical questions about how languages are implemented, cognized, acquired, extended, etc., in some cognitive system.
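As a toy illustration of the first, mathematical sort of question, consider a tiny language given by an explicitly listed meaning function (the strings and contents below are invented purely for illustration); the length of the shortest string expressing a given content is then a straightforwardly mathematical, and necessary, fact about that language:

    # A toy language L, given by an explicitly listed meaning function mu_L.
    # The strings and contents are invented purely for illustration.
    mu_L = {
        "cat": "CAT",
        "the cat": "CAT",
        "snow is white": "SNOW IS WHITE",
        "it is true that snow is white": "SNOW IS WHITE",
    }

    def shortest_string_expressing(m):
        """Return a shortest string of L whose mu_L-value is m (a necessary fact about L)."""
        candidates = [s for s, content in mu_L.items() if content == m]
        return min(candidates, key=len) if candidates else None

    print(shortest_string_expressing("SNOW IS WHITE"))  # "snow is white"
    print(shortest_string_expressing("CAT"))            # "cat"

How some agent comes to cognize this $L$, or a nearby $L^{\ast}$, is then the separate, empirical question.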

Comments

  1. The question raised in my mind is: how do we separate formal semantics from non-formal semantics? At what point is there semantic definition? Is it possible to have a formal schism between semantic concepts? Would this create a dangerous non-formal schism? At that point, is semantics not universal enough? Is there, consequently, some language tool which is meant to replace semantics? In that case it is intriguing to muse that some aspect of mathematics may have semantic properties by virtue of the problem with semantics. But that isn't a problem with mathematics, is it? At what point does the formal concept of mathematics meet up with the formal concept of semantics? You have defined that they are separate magisteria. But if there is no bridge between formal semantics mathematica and formal semantics de lingua, then there is no FORMAL concept of formality; that would be a problem, indeed, and it would question and underscore the intensionality of forming BOTH mathematical and semantic concepts of formality.

    I hope someone will choose to respond to this, although I realize in some sense I am being arrogant.

