This concerns the concept of "*truthlikeness*" or "*accuracy*" (also sometimes called "verisimilitude" or "approximate truth"), as might be expressed by,

$A$ is closer to the truth than $B$ is,

where $A$ and $B$ are statements or theories.

David Miller published an interesting problem concerning how to make sense of this concept (1974, "Popper's Qualitative Theory of Verisimilitude", *BJPS*; and 1976, "Verisimilitude Redeflated", *BJPS*). Because the notion seems clearly relevant to any decent theory of scientific method, Sir Karl Popper had previously tried to develop an explication of the concept, but it turned out to suffer from a serious (separate) problem, also discovered by David Miller and set out in the 1974 paper (roughly, on that explication, all false theories are as truthlike as each other).

But the other problem, the Language Dependence Problem, is this. (It is also explained in Sec. 1.4.4 of Oddie, "Truthlikeness", SEP.) The statements $A$ and $B$ are, we suppose, both *false*. But we should still, nonetheless, like to make sense of what it might mean for $A$ to be closer to the truth (or more accurate) than $B$ is. For a scientific example, we would intuitively like to say that Einstein's relativistic equation for the kinetic energy of a point particle of mass $m$ at speed $v$,

$E_k = \frac{1}{2}mv^2 + \frac{3}{8} m \frac{v^4}{c^2} + \dots$

is more accurate than the classical equation,

$E_k = \frac{1}{2}mv^2$.

(Miller 1975, "The Accuracy of Predictions" (*Synthese*), explains how the language dependence problem arises here also, for such comparisons.)
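As a quick numerical illustration of the kinetic-energy comparison (my own sketch, not from Miller; units with $m = c = 1$ are chosen purely for convenience), one can check that the two-term relativistic expansion lands closer to the exact value $E_k = mc^2(\gamma - 1)$ than the classical formula does:

```python
import math

# Sketch: compare the classical and series-expanded relativistic kinetic
# energies against the exact relativistic value E_k = m*c^2*(gamma - 1).
# The choice m = 1, c = 1 and v = 0.1c is purely illustrative.
m, c = 1.0, 1.0
v = 0.1 * c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
exact = m * c**2 * (gamma - 1.0)                            # exact relativistic KE
classical = 0.5 * m * v**2                                  # classical formula
expanded = 0.5 * m * v**2 + (3.0 / 8.0) * m * v**4 / c**2   # two-term expansion

# The expansion's error is smaller than the classical formula's error.
print(abs(expanded - exact) < abs(classical - exact))  # True
```

Of course, Miller's point is that even this kind of "smaller error" comparison turns out to be language-relative.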
Suppose $A$ and $B$ are false sentences in language $L$, and let the truth be $T$. That is, $T$ is the single true statement that $A$ and $B$ are falsely approximating. Miller pointed out that, given some very natural ways to measure the "distance" between $A$ (or $B$) and the truth, a language relativity appears. One such way is to count the number of "errors" in a false statement; the statement with the fewest errors is then closer to the truth.

I will give an example which is based on Miller's weather example, but a bit simpler. Let the language $L$ be a simple propositional language with, as its primitive sentences,

$R$ ("it is raining"),

$C$ ("it is cold").

Suppose the truth $T$ is that it is not raining and it is cold. Let $A$ say that it is raining and it is cold, and let $B$ say that it is raining and it is not cold. So, both $A$ and $B$ are false. In symbols, we have:

$T = \neg R \wedge C$.

$A = R \wedge C$.

$B = R \wedge \neg C$.

Which of $A$ or $B$ is more accurate? It seems intuitively clear that

(1) $A$ is closer to the truth than $B$ is.

For $R \wedge C$ makes one error, while $R \wedge \neg C$ makes two errors. For those interested in the fancier details, this is called the Hamming distance between the corresponding binary sequences. For this case, it amounts to $(1,1)$ being closer to $(0,1)$ than $(1,0)$ is.
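The error-counting measure can be made concrete in a few lines (a sketch of my own, writing assignments as $(R, C)$ pairs with $1$ for true and $0$ for false):

```python
# Sketch: the "number of errors" measure as Hamming distance between
# truth-value assignments, written (R, C) with 1 = true, 0 = false.
def hamming(x, y):
    """Number of coordinates on which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

T = (0, 1)  # not raining, cold  (the truth)
A = (1, 1)  # raining, cold
B = (1, 0)  # raining, not cold

print(hamming(A, T))  # 1 error
print(hamming(B, T))  # 2 errors
# So A counts as closer to the truth than B, matching (1).
```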

Miller's language dependence problem is that if we translate the statements into an equivalent language $L^{\ast}$, then we can *reverse* this evaluation! We can get the translation of $B$ to be closer to the truth than the translation of $A$ is.

First, we define $L^{\ast}$ to have primitive sentences $R$ and a new sentence $E$, whose translation into $L$ is "it is raining if and only if it is cold". I.e., $R \leftrightarrow C$. One can "invert" this translation, and see that the translation of $C$ into $L^{\ast}$ is given by $R \leftrightarrow E$. (This is because if $\phi \equiv (\alpha \leftrightarrow \beta)$, then $\beta \equiv (\phi \leftrightarrow \alpha)$.)
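The inversion step can be verified by brute force over the truth table (a small check of my own, not part of Miller's presentation):

```python
from itertools import product

# Sketch: a brute-force truth-table check of the inversion step used in
# the translation: if phi is (alpha <-> beta), then beta is logically
# equivalent to (phi <-> alpha).
iff = lambda p, q: p == q  # material biconditional on booleans

for alpha, beta in product([True, False], repeat=2):
    phi = iff(alpha, beta)
    assert beta == iff(phi, alpha)
print("equivalence holds in all four cases")
```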

Next we translate $T$, $A$ and $B$ into $L^{\ast}$. Expressed in the new language $L^{\ast}$, we have:

$T^{\ast} = \neg R \wedge \neg E$.

$A^{\ast} = R \wedge E$.

$B^{\ast} = R \wedge \neg E$.

Which of these is more accurate? Now we get:

(2) $B^{\ast}$ is closer to the truth than $A^{\ast}$ is.

For $B^{\ast}$ makes only one error, while $A^{\ast}$ makes two errors.

Consequently, if we adopt this measure of distance from the truth, we can *reverse* closeness to truth or accuracy simply by translating into an equivalent language (i.e., one that has different primitives).

In technical terms, the problem is this. We have placed a metric $d$ on the set $X$ of propositional assignments (or models, if you like): the Hamming distance. Indeed, $X$ is just the set of four binary ordered pairs, i.e.,

$X = \{(0,0),(0,1),(1,0),(1,1)\}$,

and the Hamming distances are given by:

$d((0,0),(1,0)) = 1$,

$d((1,0),(0,1)) = 2$,

etc.

So, $(X,d)$ is then a metric space. A Miller-style translation from $L$ to $L^{\ast}$ induces a bijection $f: X \to X$ of this space, but this mapping $f$ is *not* an isometry of $d$.
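This can be checked directly (my own sketch): since the translation of $E$ into $L$ is $R \leftrightarrow C$, an $L$-world $(r, c)$ corresponds to the $L^{\ast}$-world $(r, e)$ with $e$ true just in case $r$ and $c$ agree. The induced map is a bijection on $X$, but it moves some pairs further apart:

```python
from itertools import product

# Sketch: the bijection f on assignments induced by the translation.
# An L-world (r, c) becomes the L*-world (r, e) with e = (r <-> c),
# since E's translation into L is R <-> C.
def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def f(world):
    r, c = world
    return (r, int(r == c))

X = list(product([0, 1], repeat=2))
assert sorted(f(w) for w in X) == sorted(X)  # f is a bijection on X

# But f is not an isometry: it changes some distances.
u, w = (0, 1), (1, 1)         # the worlds of T and A in L
print(hamming(u, w))          # 1
print(hamming(f(u), f(w)))    # 2
```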

Regarding Boddington's "most accurate report" comment on Leiter Reports: she does not say that she means the Daily Mail minimizes error. Indeed, she says only that it "more or less" comes from the evidence at the inquest. Rather, what she says is that it *maximizes assertions made* (perhaps including the "following" embellishment) and that it (as compared with certain non-tabloid papers, called "quality" in scare quotes) is *informative about the identity of the author* (so that one may question the author about the source of the witness statements and, as with Boddington, not get an answer).
