Re: Litter on the Bayesian Way?

From: Eliezer Yudkowsky
Date: Sat Sep 18 2004 - 19:32:25 MDT

Emil Gilliam wrote:
> From Cosma Shalizi's review of Deborah G. Mayo, "Error and the Growth
> of Experimental Knowledge":
> "Bayesians not only assign such probabilities, they do so a priori,
> condensing their prejudices into real numbers between 0 and 1 inclusive;
> two Bayesians cannot meet without smiling at each other's priors. True,
> they can shown that, in the limit of presenting an infinite amount of
> (consistent) evidence, the priors "wash out" (provided they're
> "non-extreme," not 0 or 1 to start with); but it has also been shown
> that, "for any body of evidence there are prior probabilities in a
> hypothesis H that, while nonextreme, will result in the two scientists
> having posterior probabilities in H that differ by as much as one wants"
> (p. 84n, Mayo's emphasis). This is discouraging, to say the least, and
> accords very poorly with the way that scientists actually do come to
> agree, very quickly, on the value and implications of pieces of
> evidence. Bayesian reconstructions of episodes in the history of
> science, Mayo says, are on a level with claiming that Leonardo da Vinci
> painted by numbers since, after all, there's some paint-by-numbers kit
> which will match any painting you please."

Call it a wild guess, but I suspect you found this page by googling
"Bayesian Way", just as I did a couple of years ago.

The simple reply is that Kolmogorov complexity makes a fine prior (with a
bit of tweaking you can make the distribution over all computations sum to
1); this formalism, known as Solomonoff induction, is what Occam's Razor
approximates.
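A toy sketch of the idea (my own illustration, not from the post; the
hypothesis names and bit-lengths are made up, and real Solomonoff
induction is uncomputable): weight each hypothesis by 2^-K, where K is
the length of its shortest description in bits, then normalize so the
weights sum to 1.

```python
# Toy description-length prior: a computable stand-in for the
# (uncomputable) Solomonoff prior.  Shorter descriptions get
# exponentially more prior mass -- Occam's Razor, formalized.
hypotheses = {
    # hypothesis: length of its shortest description, in bits (invented)
    "fair coin": 3,
    "biased 60/40 coin": 8,
    "elaborate conspiracy of coin-flippers": 25,
}

# Weight each hypothesis by 2^-K, then normalize so the prior sums
# to 1 (the "bit of tweaking" mentioned above).
weights = {h: 2.0 ** -k for h, k in hypotheses.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

for h, p in sorted(prior.items(), key=lambda kv: -kv[1]):
    print(f"{p:.6f}  {h}")
```

The normalization step is what turns a mere complexity penalty into a
proper probability distribution you can update with Bayes' rule.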

The deeper, more philosophical reply:

(a) Cox's Theorem establishes that you must use Bayesian updating of your
probabilities if your reasoning is to obey simple consistency axioms, and
that updating rule would not be ambiguous even if the priors were.

(b) Abandoning Bayesian consistency would hardly make the problem *more*
objective.

(c) A Bayesian strives to attain the best possible calibration of prior
probabilities, just as one strives for the best possible calibration of
posterior probabilities; that mathematicians haven't *yet* all agreed on
which formal, universal process to pick for doing this doesn't change the
goal or its importance.

(d) One who seeks to attain the Way strives to calibrate probabilities as
well as possible, to milk every last bit of accuracy and precision from
the guesstimates, and this goal takes precedence over convincing others
that you are using an objective method.

(e) And of course, Kolmogorov's Razor makes a fine prior.
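To see the "washing out" from the quoted passage concretely, here is a
minimal sketch (my own illustration; the prior parameters and data are
invented): two Bayesians start with very different nonextreme Beta priors
on a coin's heads-probability, update on the same 1000 flips via the
standard conjugate rule, and end up nearly agreeing.

```python
# Two Bayesians with different nonextreme Beta(a, b) priors on a
# coin's heads-probability, updating on shared evidence.
# Conjugate update: posterior is Beta(a + heads, b + tails).

def posterior_mean(a, b, heads, tails):
    """Posterior mean of the heads-probability under a Beta(a, b) prior."""
    return (a + heads) / (a + b + heads + tails)

heads, tails = 700, 300   # shared evidence: 1000 flips (illustrative)

optimist = posterior_mean(20, 2, heads, tails)   # prior mean ~0.91
skeptic  = posterior_mean(2, 20, heads, tails)   # prior mean ~0.09

print(f"optimist: {optimist:.3f}")
print(f"skeptic:  {skeptic:.3f}")
print(f"gap:      {abs(optimist - skeptic):.3f}")
```

Both posteriors land close to the observed frequency of 0.7, and the gap
between them shrinks as more evidence arrives; Mayo's footnoted point is
that for any *fixed* body of evidence one can always reverse-engineer
nonextreme priors that keep the gap large.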

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:48 MDT