RE: escape from simulation

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Mar 29 2004 - 07:25:50 MST


> One appealing answer to this question of the prior is to
> define the prior probability of a possible universe being
> base reality as the inverse of the complexity of its laws of
> physics. This could be formalized as P(X) =
> n^-K(X) where X is a possible universe, n is the size of the
> alphabet of the language of a formal set theory, and K(X) is
> the length of the shortest definition in this language of a set
> isomorphic to X. (Those of you familiar with algorithmic
> complexity theory might notice that K(X) is just a
> generalization of algorithmic complexity, to sets, and to
> non-constructive descriptions. The reason for this
> generalization is to
> avoid assuming that base reality must be discrete and computable.)

But of course, using this method, for each formal set theory, you can
only distinguish countably many different possible universes -- even
though some of these may be "uncomputable" according to the
computational model adopted.
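
A toy numerical sketch of that prior (in Python; the candidate
universes and their K(X) values are invented, since the real K is not
computable) just to make the shape of P(X) = n^-K(X) concrete:

    # Toy sketch of the proposed prior P(X) = n^(-K(X)).
    # The "universes" and their shortest-definition lengths K(X) are
    # made up for illustration; the real K(X) is uncomputable.  Note
    # that finite descriptions over an n-letter alphabet form a
    # countable set, which is the point above.

    n = 2  # alphabet size of the formal language

    toy_universes = {"U1": 10, "U2": 12, "U3": 15}  # assumed K(X) values

    unnormalized = {u: float(n) ** -k for u, k in toy_universes.items()}
    total = sum(unnormalized.values())   # normalize over the toy set only
    prior = {u: w / total for u, w in unnormalized.items()}

    for u, p in sorted(prior.items()):
        print(u, round(p, 4))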

> The lack of an objective criterion for choosing a formal set
> theory for this purpose leads me to wonder if perhaps the
> choice of a prior is a subjective one, similar to the
> "choice" of a supergoal in the presumed absence of objective
> morality. In case it is, shouldn't we try to answer this
> question before building an SI?

Clearly, the notion of "base reality" as an objective entity is
ill-formed.

Rather, all we have is "apparently base reality, based on the
perceptions and cognitions of mind M, or of minds in class M."

A mind M-1 with greater capability may be able to detect that M's "base
reality" isn't really a "base reality" at all.

One could surely prove that, for any mind, there are some possible
simulations it could be living in, where it could never detect it was in
a simulation -- yet an abler mind could. This would be yet another
variant of Gödel's theorem.

I agree that there is no way to "objectively" choose a prior over the
space of possible universes. This is essentially the problem at the
heart of the Bayesian approach to induction (in the general, Humean
sense). You need a prior distribution on hypothesis space (in this
case, hypotheses about which universe exists).

One approach that's been discussed on this list a lot is algorithmic
information, the Solomonoff-Levin measure, etc. However, this depends
on the base computational model.
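
A small sketch of that dependence (Python; the two "machines" are just
hypothetical assignments of shortest-program lengths to the same three
hypotheses, nothing more than illustrative numbers):

    # Toy sketch: a Solomonoff-style prior 2^(-length) over the same
    # hypotheses, under two different assumed reference machines.
    # The lengths are invented; the point is only that the weights
    # shift when the base computational model changes.

    hypotheses = ["H1", "H2", "H3"]

    lengths_machine_A = {"H1": 5, "H2": 9, "H3": 14}
    lengths_machine_B = {"H1": 11, "H2": 6, "H3": 13}

    def prior(lengths):
        w = {h: 2.0 ** -lengths[h] for h in hypotheses}
        z = sum(w.values())
        return {h: w[h] / z for h in hypotheses}

    print("machine A:", prior(lengths_machine_A))
    print("machine B:", prior(lengths_machine_B))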

One approach here, following Hume, is to take "human nature" as a base
computational model -- so that prior probability becomes "simplicity to
the human mind." Or, taking a page from Eliezer's notion of
humane-ness, perhaps "simplicity to some sort of idealized collective
human mind." But I don't find this very satisfactory.

I'm happier applying the human intuition for simplicity to the *choice
of computational model*. Hence, I prefer a base computational model
involving very simple computational operations, such as the S and K
combinators. See, e.g.,

http://homepages.cwi.nl/~tromp/cl/CL.pdf
 
for an apparently very-close-to-minimal formulation of universal
computation.
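
For concreteness, here is a minimal sketch of S/K reduction (Python; my
own toy term encoding, not Tromp's), enough to see that the identity
combinator falls out as I = S K K:

    # Toy reducer for the S and K combinators.  Applications are nested
    # pairs (f, x); "S" and "K" are the primitives, anything else is an
    # inert constant.  Rules: K a b -> a,  S a b c -> a c (b c).

    def step(t):
        """One leftmost reduction step; returns (new_term, changed)."""
        if isinstance(t, tuple):
            f, x = t
            if isinstance(f, tuple) and f[0] == "K":      # K a b -> a
                return f[1], True
            if (isinstance(f, tuple) and isinstance(f[0], tuple)
                    and f[0][0] == "S"):                  # S a b c -> a c (b c)
                a, b, c = f[0][1], f[1], x
                return ((a, c), (b, c)), True
            f2, ch = step(f)
            if ch:
                return (f2, x), True
            x2, ch = step(x)
            return (f2, x2), ch
        return t, False

    def normalize(t):
        changed = True
        while changed:
            t, changed = step(t)
        return t

    I = (("S", "K"), "K")             # I = S K K
    print(normalize((I, "hello")))    # -> hello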

As it happens, this ties in with Novamente AI, since our system uses
combinatory logic as part of its knowledge representation.

-- Ben G



