From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Mar 28 2004 - 14:00:48 MST
Wei Dai wrote:
> Suppose the SI never finds any evidence that it is living in a
> less-than-perfect simulation. It still cannot rule out the possibility
> that it is living in a perfect simulation. Whether or not it continues to
> pursue escape as its main subgoal and at which point it gives up depend on
> the a priori probability that it assigns to the possibility of being in a
> simulation. But where does this prior come from? I'm curious if anyone
> else has thought about this problem.
I have, and I also blew up at the problem of finding the prior
probability. Not really helpful, but there ya go.
--
Eliezer S. Yudkowsky                    http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT