Re: FAI: Collective Volition

From: Wei Dai
Date: Mon Jun 28 2004 - 23:27:00 MDT

On Thu, Jun 03, 2004 at 02:00:58PM -0400, Eliezer Yudkowsky wrote:
> I think that this abuses the term "Bayesian prior", which with regard to AI
> design is not meant to refer to current beliefs, but to the ur-prior, which
> I would expect to be the Principle of Indifference over identifiable
> interchangeable spaces, and for more complex problems Solomonoff induction
> on simple programs and simple conceptual structures.

What does "Solomonoff induction on simple programs and simple conceptual
structures" mean? I already know what Solomonoff induction is, but have
never heard of the qualification "on simple programs and simple conceptual
structures" before.
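
For reference, the standard construction is the universal prior M(x) = sum over programs p with U(p) = x of 2^(-|p|), for a fixed universal machine U. A toy sketch of that idea (the interpreter below is invented purely for illustration, is not universal, and skips the prefix-free requirement that makes the real weights sum to at most 1):

```python
# Toy sketch of a Solomonoff-style prior.  "toy_machine" is an invented
# interpreter used only for illustration -- it is NOT universal, and the
# programs are not prefix-free, so the total mass is unnormalized.
# The prior mass of an output string x is
#     M(x) = sum over programs p with toy_machine(p) == x of 2^(-len(p))
from collections import defaultdict
from itertools import product

def toy_machine(program: str) -> str:
    """First bit selects a rule: '0'+data emits data verbatim,
    '1'+data emits data twice (a crude stand-in for compression)."""
    if not program:
        return ""
    op, data = program[0], program[1:]
    return data if op == "0" else data * 2

def prior(max_len: int) -> dict:
    """Sum 2^(-len(p)) over all bit-string programs up to max_len."""
    m = defaultdict(float)
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            m[toy_machine("".join(bits))] += 2.0 ** (-n)
    return m

m = prior(9)
# "0101" is producible verbatim ("00101", weight 2^-5) and by doubling
# "01" ("101", weight 2^-3); the incompressible "0110" only verbatim.
assert m["0101"] > m["0110"]
```

The point of the toy is only that shorter (more compressed) descriptions dominate the mass assigned to a string.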

> And even this
> ur-prior can be refined by checking its parameters against the test of
> complex reasoning, tweaking the hypothesis space to more closely match
> observed reality. I think. I haven't checked the math.

This I don't understand at all. Do you have a reference for this idea, or
a more complete description?

> "Allah exists", "Allah does not exist" is not an appropriate thing to have
> in an ur-prior at all, and anyone programming in arbitrary propositions
> into the ur-prior with a probability of 10^11 or something equally
> ridiculous is playing nitwit games. (And my current understanding of FAI
> design indicates this nitwit game would prove inconsistent under
> reflection.) Any reasonable assignment of ur-priors would let the evidence
> wash away any disagreement. If you can possibly end up fighting over
> ur-priors, you're not just a jerk, you're a non-Bayesian jerk. Ur-priors
> are not arbitrary; they calibrate against reality, like any other map and
> territory.

It would be nice if the ur-prior were not arbitrary, but so far no one has
proposed a single prior that everyone can agree on. If you disagree,
please provide a reference to the prior that we should all use. If you're
thinking of the universal enumerable continuous semimeasure (denoted M in
the book "An Introduction to Kolmogorov Complexity and Its Applications"),
which the reference to Solomonoff induction seems to suggest,
unfortunately there are an infinite number of them, one for each universal
Turing machine. And even worse, as I mentioned in the thread "escape from
simulation", it completely ignores the possibility that the universe is
uncomputable, essentially assigning a prior probability of zero to this
possibility.

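To make the machine-dependence concrete: two different reference machines assign different mass to the very same string, which is why "the" universal prior is only pinned down up to the choice of UTM (and, by the invariance theorem, up to a multiplicative constant between any two choices). A toy sketch, with two invented interpreters (neither is actually universal):

```python
# Two invented toy interpreters (neither actually universal) assign
# different prior mass to the same string -- the point being that M
# depends on the choice of reference machine.
from itertools import product

def machine_a(p: str) -> str:
    return p                      # every program emits itself verbatim

def machine_b(p: str) -> str:
    if not p:
        return ""
    return p[1:] * 2 if p[0] == "1" else p[1:]   # '1' doubles the data

def mass(machine, x: str, max_len: int = 10) -> float:
    """Total weight 2^(-len(p)) of programs up to max_len producing x."""
    return sum(2.0 ** (-n)
               for n in range(1, max_len + 1)
               for bits in product("01", repeat=n)
               if machine("".join(bits)) == x)

# machine_a reaches "0101" only via "0101" itself (2^-4 = 0.0625);
# machine_b reaches it via "00101" and "101" (2^-5 + 2^-3 = 0.15625).
assert mass(machine_a, "0101") != mass(machine_b, "0101")
```

The invariance theorem bounds the disagreement by a constant factor, but that constant can favor any particular proposition you like, which is the sense in which the choice remains arbitrary.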
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT