Ultra-Finitism and Token Cognizability

Ultra-finitism is a philosophical doctrine in the foundations of mathematics. It is hard to analyse, in part because there is so little research literature in analytic philosophy on the topic. There is a large literature on nominalism---the doctrine that there are no mathematical entities, period: no numbers, etc. (We express this claim in English, using the English words "there are no ...". If someone has a better suggestion, it would be interesting to know what it is. Klingon?) But there is not much of a literature on the view that there are some mathematical entities, but only strictly finitely many of them. I looked up some of the literature on ultra-finitism (a.k.a. "strict finitism") a couple of years ago and mentioned it in an earlier post, here.

The other reason for the difficulty in analysing the content of ultra-finitism is that its advocates are often mathematicians, who usually lack even minimal knowledge of analytic philosophy. For example, they will not have read Frege or Russell. This lack of knowledge can then permit a plethora of confusions: about use/mention, types/tokens, semantics/epistemology, etc. Consequently, one may come across the strange belief that numbers are concrete physical objects, like cans of beans. But, in such cases, the person is almost certainly mixing up numbers with their tokens (e.g., a concrete inscription, perhaps in ink, of a canonical numeral which denotes the number) or with sets of concrete things of that cardinality. But, for example, the number 57 just isn't a physical token, and 57 also isn't a set $X$ of concrete things (e.g., chairs or rocks) such that $|X| = 57$.

Still, that said, I think one can attempt to provide a maximally charitable interpretation of ultra-finitism. I think two philosophical doctrines give an optimal interpretation, one epistemic and the other ontological:
Token Cognizability (TC)
If there is justification for asserting the existence of the value of a term $t$, then this justification is an actual token reduction of $t$.
Strict Finiteness (Fin)
There are terms for which there is no actual token reduction (e.g., the term "$2^{1000}$").
Many numerical terms do have token reductions. For example, here is a token reduction for the term "$(ss0 + sssss0) \cdot ss0$":
$(ss0 + sssss0) \cdot ss0 = ((ss0 + sssss0) \cdot s0) + (ss0 + sssss0)$
$= (((ss0 + sssss0) \cdot 0) + (ss0 + sssss0)) + (ss0 + sssss0)$
$= ((0 + (ss0 + sssss0)) + (ss0 + sssss0))$
$= \dots$
$= sssssss0 + sssssss0$
$= \dots$
$= ssssssssssssss0$
(Well, to be entirely honest, I'm cheating a little. If I hadn't written "$\dots$" twice, and had instead filled in the annoying computations of the sums, then this would have been a token reduction of the term; i.e., a token of a proof reducing $(2 + 5) \cdot 2$ to $14$.)
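To make this concrete, here is a minimal sketch in Python (my own illustration, not from the post) that carries out such token reductions mechanically: terms over $0$, $s$, $+$ and $\cdot$ are nested tuples, and each step applies one of the usual primitive recursive defining equations. The order of steps may differ from the trace above, but the rules are confluent, so the terminal numeral, $ssssssssssssss0$, is the same.

```python
# A sketch of token reduction for closed terms over 0, s, + and . (times),
# using the usual primitive recursive defining equations:
#   x + 0 = x        x + sy = s(x + y)
#   x . 0 = 0        x . sy = (x . y) + x
# (Illustrative only; "*" is the internal tag for multiplication.)

def show(t):
    if t[0] == "0": return "0"
    if t[0] == "s": return "s" + show(t[1])
    op = " + " if t[0] == "+" else " . "
    return "(" + show(t[1]) + op + show(t[2]) + ")"

def step(t):
    """Apply one defining equation; return the new term, or None if t is a numeral."""
    if t[0] == "0":
        return None
    if t[0] == "s":
        r = step(t[1])
        return None if r is None else ("s", r)
    x, y = t[1], t[2]
    if y[0] == "0":      # x + 0 = x   ;   x . 0 = 0
        return x if t[0] == "+" else ("0",)
    if y[0] == "s":      # x + sy = s(x + y)   ;   x . sy = (x . y) + x
        return ("s", ("+", x, y[1])) if t[0] == "+" else ("+", ("*", x, y[1]), x)
    return (t[0], x, step(y))   # y is compound: reduce it until a rule applies

def reduce_with_trace(t):
    """Print every line of the reduction -- the token itself."""
    print(show(t))
    while (t := step(t)) is not None:
        print("= " + show(t))

# The example above: (ss0 + sssss0) . ss0, reducing to ssssssssssssss0.
two  = ("s", ("s", ("0",)))
five = ("s", ("s", ("s", ("s", ("s", ("0",))))))
reduce_with_trace(("*", ("+", two, five), two))
```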

Given that many terms with smallish values do have token reductions, we can, by (TC) (taking the token reduction itself to be the required justification), assert the existence of these numbers. Let $t_{max}$ be the "largest" such term. However, (Fin) reminds us that there are terms $u$ for which this has not actually been done. It is easy to define such terms. E.g.,
$u_1 := t_{max} + 1$. 
Or:
$u_2 := 2^{t_{max}}$. 
So, let $N_{max}$ be the value of $t_{max}$. This number is concretely realized. But since "larger" terms like $u_1$ and $u_2$ have not been token reduced, it follows that no number larger than $N_{max}$ is concretely realized. Consequently, by (TC), we lack justification for asserting the existence of any number larger than $N_{max}$.
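Schematically (this formalization is mine, not the post's): write $\mathrm{Just}(t)$ for "there is justification for asserting the existence of the value of $t$" and $\mathrm{Red}(t)$ for "$t$ has an actual token reduction". Then (TC) gives $\mathrm{Just}(u_1) \rightarrow \mathrm{Red}(u_1)$, (Fin) gives $\neg \mathrm{Red}(u_1)$, and so $\neg \mathrm{Just}(u_1)$ follows by modus tollens; likewise for $u_2$ and any other non-reduced term.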

Anyone who is neither a nominalist nor an ultra-finitist (e.g., a Hilbertian finitist, a Brouwerian constructivist, a Fefermanian predicativist, or a Gödelian mathematical realist) can go on to conclude that there is some number $k$ which is not concretely realized. For example, if $N_{max}$ is the maximum concretely realized number, then simply let $k = N_{max} + 1$.

Of course, the ultra-finitist will reply (correctly) that $k$ is not concretely realized, and that, by (TC), there is no reason to assert its existence. The opponent of ultra-finitism will reply that it's simply irrelevant whether a number is concretely realized or not. We have indirect reasons for asserting the existence of all the numbers, irrespective of their concrete realisation or token constructibility.

This is then an impasse. But it seems clear where the impasse lies: it lies in the assumption (TC), accepted by ultra-finitists, but rejected by others.

Comments

  1. Michael Dummett did a paper "Strict Finitism", in "Truth And Other Enigmas".

  2. Thanks, Anon,

I think you mean "Wang's Paradox"? Yes, I refer to it here:

    http://m-phi.blogspot.co.uk/2011/10/ultra-finitism-types-and-tokens.html

    Perhaps I should have linked to my previous posts ...

    Cheers,

    Jeff

I'm curious whether ultrafinitists believe all token reductions are comparable. The way I do it, deciding whether $t = s$ requires $2\min(t,s)+1$ steps. If I were an ultrafinitist, I would therefore doubt that the "top half" of (pre-reduced) tokens are comparable (never mind those that aren't yet reduced).

    Cheers,

    François

It's hard to have a well-grounded guess on that. I'm not sure, because the view hasn't received anything like a canonical formulation, unlike, say, nominalism (Field), finitism (Hilbert, Tait, Kreisel) and predicativism (Feferman). The ultra-finitists whose work I've looked at are Doron Zeilberger, Edward Nelson & Vladimir Sazonov; but they, and others, have different emphases. E.g., Nelson is sceptical of exponentiation and of faster-growing primitive recursive functions, while Zeilberger and Sazonov are simply sceptical of large finite numbers. So, I'm trying to reconstruct how one obtains the scepticism about the larger numbers. This, I think, comes down to an issue about actual physical token computations of the values of terms, and the claim that these are the only grounds for accepting such numbers.

Yes, that count sounds right (you mean $val(t)$?). There will be computationally distinct reductions, some more efficient than others, I guess. A lower bound on the length of a computation of $t = \underline{n}$ is linear in $n$, since the final line alone contains $n$ occurrences of $s$; but this can be brought down to $O(\log n)$ if one allows binary numerals. There is probably work in complexity theory on the length of these computations ...
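    As a small aside, a few lines of Python (my own illustration, relying on Python's arbitrary-precision integers) make the unary/binary gap vivid:

    ```python
    # The unary numeral for n has n + 1 symbols, so a reduction ending in it
    # has length at least linear in n; the binary numeral needs ~log2(n) digits.
    n = 2 ** 1000
    print(n.bit_length())    # 1001 binary digits: easily written down
    print(len(bin(n)) - 2)   # 1001 again, via the binary string itself
    # By contrast, the unary numeral for n would need about 10^301 symbols.
    ```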

    Cheers,

    Jeff

