Hannes's talk is a response to the debate about whether a Ramsey sentence for a theory $\Theta$ can account for the inductive systematization of evidence given by $\Theta$ itself. This debate goes back to earlier works by Carl Hempel and Israel Scheffler (The Anatomy of Inquiry, 1963) and, in particular, to Scheffler's 1968 Journal of Philosophy paper, "Reflections on the Ramsey Method". The debate has recently been revived in an interesting 2012 Synthese paper, "Ramsification and Inductive Inference", by Panu Raatikainen. Raatikainen's conclusion is that ramsifying a theory $\Theta$ damages the inductive systematization that the theory $\Theta$ provides. I recommend interested readers consult Panu's 2012 paper on this.
On Hannes's approach, one assigns a probability to a Ramsey sentence $\Re(\Theta)$, on the assumption that the corresponding Carnap sentence

$\Re(\Theta) \to \Theta$

has probability 1. Since Carnap himself insisted that the Carnap sentence of a theory is analytic, it seems reasonable, from his perspective, to assign it probability 1. On this Carnapian assumption, it can then be shown that the probability of a theory and that of its Ramsey sentence are the same. (Hannes's discussion also related these probabilistic conclusions to the notion of logical probability, obtained by counting the models of a theory over a finite domain.)
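As an aside on the model-counting idea, here is a toy illustration of logical probability over a finite domain. The theory and predicates are my own invented example, not from Hannes's talk: a two-element domain, one observational predicate $O$ and one theoretical predicate $T$, with $\Theta$ saying $\forall x\,(T(x) \leftrightarrow O(x))$. Its Ramsey sentence $\exists X\,\forall x\,(X(x) \leftrightarrow O(x))$ is second-order valid, so counting models gives the two sentences different logical probabilities; consistently with the result below, the Carnap sentence here does not get probability 1.

```python
from itertools import product

# Toy theory over domain {0, 1} (an invented illustration):
#   Theta:          for all x, T(x) <-> O(x)   (T theoretical, O observational)
#   Ramsey(Theta):  there exists X s.t. for all x, X(x) <-> O(x)
domain = [0, 1]

# An interpretation assigns each predicate an extension, encoded as a
# tuple of booleans (one per domain element).
extensions = list(product([False, True], repeat=len(domain)))

def theta(T, O):
    return all(T[x] == O[x] for x in domain)

def ramsey_theta(O):
    # Second-order existential: some extension X plays the T-role.
    return any(all(X[x] == O[x] for x in domain) for X in extensions)

# Carnap-style logical probability: models / all interpretations.
interps = [(T, O) for T in extensions for O in extensions]
pr_theta = sum(theta(T, O) for T, O in interps) / len(interps)
pr_ramsey = sum(ramsey_theta(O) for _, O in interps) / len(interps)
print(pr_theta, pr_ramsey)  # 0.25 1.0
```

Here the Ramsey sentence is true in every interpretation while $\Theta$ holds in only a quarter of them, so the two probabilities diverge exactly because the Carnap sentence lacks probability 1 under model counting.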
Recall that ramsification preserves the deductive systematization of observational evidence: for any observational sentence $\phi$,

$\Theta \vdash \phi$ if and only if $\Re(\Theta) \vdash \phi$.

But suppose the Carnap sentence has probability 1. Then we can show that $\Theta$ and $\Re(\Theta)$ are probabilistically equivalent.
First, we give a lemma in probability theory.

Lemma: $Pr(A) + Pr(A \to B) = Pr(B) + Pr(B \to A)$.

Proof. Reasoning using the probability axioms,

$Pr(A \to B) = Pr(\neg A \vee B)$

$= Pr(\neg A) + Pr(B) - Pr(\neg A \wedge B)$

$= 1 - Pr(A) + Pr(B) - Pr(\neg (B \to A))$

(since $\neg A \wedge B$ is equivalent to $\neg(B \to A)$)

$= 1 + Pr(B) - Pr(A) - 1 + Pr(B \to A)$

$= Pr(B) - Pr(A) + Pr(B \to A)$.

So:

$Pr(A) + Pr(A \to B) = Pr(B) + Pr(B \to A)$. QED.
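The Lemma can be spot-checked numerically. The sketch below (my own, using an arbitrary randomly weighted distribution over the four truth-value assignments to $A$ and $B$) computes both sides of the identity and confirms they agree:

```python
import random

# Check the Lemma  Pr(A) + Pr(A -> B) = Pr(B) + Pr(B -> A)
# on a random probability distribution over the four truth-value
# assignments to (A, B).
random.seed(0)
weights = [random.random() for _ in range(4)]
total = sum(weights)
# p[(a, b)] = probability that A has value a and B has value b
p = {(a, b): w / total
     for (a, b), w in zip([(0, 0), (0, 1), (1, 0), (1, 1)], weights)}

pr_A = p[(1, 0)] + p[(1, 1)]
pr_B = p[(0, 1)] + p[(1, 1)]
pr_A_imp_B = 1 - p[(1, 0)]   # A -> B fails only when A true, B false
pr_B_imp_A = 1 - p[(0, 1)]   # B -> A fails only when B true, A false

lhs = pr_A + pr_A_imp_B
rhs = pr_B + pr_B_imp_A
assert abs(lhs - rhs) < 1e-12
print("Lemma holds on this distribution")
```

Since the check works for arbitrary weights, re-running with any other seed exercises a different distribution.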
Next, let $\Theta$ be a theory and let $\Re(\Theta)$ be its Ramsey sentence. Note that
$\Theta \vdash \Re(\Theta)$.

(That is, one can deduce $\Re(\Theta)$ from $\Theta$ in a system of second-order logic, using comprehension.)
It follows that

$Pr(\Theta \to \Re(\Theta)) = 1$.

Suppose that the Carnap sentence, $\Re(\Theta) \to \Theta$, has probability 1. That is,

$Pr(\Re(\Theta) \to \Theta) = 1$.

Then the Lemma above gives:

$Pr(\Re(\Theta)) = Pr(\Theta)$.

So, given a theory $\Theta$, the probability of its Ramsey sentence equals the probability of the theory itself, on the assumption that its Carnap sentence has probability 1.
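To see the argument in miniature, take $A = \Theta$ and $B = \Re(\Theta)$ in the Lemma. If both conditionals have probability 1, then the assignments ($\Theta$ true, $\Re(\Theta)$ false) and ($\Re(\Theta)$ true, $\Theta$ false) must each get probability 0, forcing the two probabilities to coincide. A tiny sketch with illustrative numbers of my own choosing:

```python
# Distribution over truth-value assignments (theta, ramsey); the
# off-diagonal entries are 0, so both conditionals have probability 1.
p = {(0, 0): 0.3, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.7}  # illustrative

pr_theta_imp_ramsey = 1 - p[(1, 0)]   # Theta -> Ramsey(Theta)
pr_ramsey_imp_theta = 1 - p[(0, 1)]   # the Carnap sentence
assert pr_theta_imp_ramsey == 1 and pr_ramsey_imp_theta == 1

pr_theta = p[(1, 0)] + p[(1, 1)]
pr_ramsey = p[(0, 1)] + p[(1, 1)]
print(pr_theta, pr_ramsey)  # 0.7 0.7
```

Both probabilities reduce to the probability of the diagonal cell where theory and Ramsey sentence are true together, exactly as the Lemma predicts.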
[UPDATE 20 April: I have made a few changes and modified the Lemma used to a slightly stronger one.]