Re: ESSAY: How to deter a rogue AI by using your first-mover advantage

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Fri Aug 31 2007 - 21:54:18 MDT


On 01/09/2007, Vladimir Nesov <robotact@mail.ru> wrote:
> Tuesday, August 28, 2007, Stathis Papaioannou wrote:
>
> SP> By TM enumerator I take it you mean a program that enumerates all
> SP> possible programs, like a universal dovetailer. In the sense I have
> SP> described, then yes, all the other simulations are irrelevant.
> SP> However, where there are multiple competing futures (as below) the
> SP> weighting of each one matters. There are theories in which it is
> SP> assumed that the universe is the set of all possible programs (which
> SP> perhaps need only exist as Platonic objects), but I don't know if it
> SP> has been successfully shown that this idea yields the known laws of
> SP> physics.
>
> It yields all laws of physics, including ours, as long as they are
> computable. (It doesn't seem possible to ever prove from observations
> that some laws of physics are not computable. Observations are
> finite. When a decision is drawn by experts, it's equivalent to the
> experts' minds being in a particular configuration, which is also a
> finite thing.)

Of course: but if all computable universes are in fact computed, is
there a reason to think that you will continue to find yourself in the
orderly sort of universe you remember, rather than experiencing your
computer turning into a fire-breathing dragon in the next moment,
which is also a computable universe? John Leslie has called this the
"failure of induction" problem for ensemble theories. Here is one
paper addressing the problem, arguing that we should expect to find
ourselves in a universe with the least information content:

http://parallel.hpc.unsw.edu.au/rks/docs/occam/occam.html
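The "TM enumerator" or universal dovetailer discussed above can be sketched as a scheduler that interleaves infinitely many programs so that every program eventually receives arbitrarily many steps. This is a minimal sketch: the toy "programs" (counters) are an invented stand-in for enumerated Turing machines, purely for illustration.

```python
import itertools

def dovetail(make_program):
    """Interleave infinitely many programs (generators).

    On round n, program n is started and every program started so far
    is run for one more step, so each program gets unboundedly many
    steps -- the classic dovetailing schedule.
    """
    active = []  # programs started so far
    for n in itertools.count():
        active.append(make_program(n))  # start program n
        for i, prog in enumerate(active):
            step = next(prog, None)     # one step of program i
            if step is not None:
                yield (i, step)

# Toy "program" n: an infinite counter starting at n (stand-in for TM n).
def make_program(n):
    return itertools.count(n)

sched = dovetail(make_program)
first = [next(sched) for _ in range(6)]
# first == [(0, 0), (0, 1), (1, 1), (0, 2), (1, 2), (2, 2)]
```

The diagonal order of (program, step) pairs is what lets a single sequential process "run" all programs at once, which is all the ensemble picture requires.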

> >> SP> However, if there are two or more competing "next moments" then the
> >> SP> number of simulations is relevant. If there are X simulations in which
> >> SP> you are tortured and Y simulations in which you are not tortured in
> >> SP> the next moment, then you have a X/(X+Y) chance of being tortured.
>
> I think I found a better argument about this point. Certainly one
> tries to anticipate the future, but this behaviour is grounded in the
> anticipation of _future experience_. And future experience itself
> does not depend on the number of times it's simulated.
>
> When you use probability theory to make rational choices, you do it
> only because you anticipate that they will pay off in your future
> experience, in the dominating bulk of possible futures. Still, you
> usually write off those possible futures where fate plays against you.

So what is your expectation of being tortured in the example above?
Wouldn't you want to decrease X relative to Y?
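The counting claim quoted above reduces to a simple ratio. As a trivial sketch, assuming (as the quoted passage does) that all continuations are weighted equally by count:

```python
from fractions import Fraction

def torture_probability(x, y):
    """Chance of being tortured in the next moment, given X torturing
    and Y non-torturing simulations, each weighted equally by count:
    X / (X + Y)."""
    return Fraction(x, x + y)

p = torture_probability(1, 3)  # 1 torture sim vs 3 safe sims
```

On this counting view, decreasing X relative to Y directly lowers the expectation, which is exactly what is at issue in the objection above.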

> That's not what I meant, but the details don't really matter. This
> counting issue raises just another serious problem with simulations.
> What really counts as a simulation of a certain mathematical model of
> a simulated universe? Any implementation arranges the matter of the
> host universe in certain patterns. Why are some patterns said to
> provide simulations and not others? The matter of the host universe
> has no direct correspondence to the 'matter' of the simulated
> universe. To establish that implementation X (a particular pattern of
> matter in the host universe) is a simulation of universe model Y (a
> mathematical description), one needs an interpretation procedure F
> that can take X as input, convert it to the same mathematical
> notation, and compare it to Y: F(X)=Y. The presence of this procedure
> (which nobody needs to actually build in order for the simulation to
> be a genuine one) is somehow implied if X is developed to implement
> Y. But how complex is F allowed to be? If it doesn't need to be
> implemented, can't it include the whole simulation, so that X is nil
> and F(nil)=Y?
>
> As a simple example, say the state of the simulated universe is a
> finite 2D binary image of size AxB. When is it considered simulated?
> If a program stores this state in computer memory, performs a
> computation that modifies it every simulated tick according to the
> simulation's laws of physics, and outputs the image to a monitor
> screen, it seems to simulate that universe. But will it cease to
> simulate it if I turn the monitor off? Will it simulate it twice if I
> install two monitors in parallel? It's only meaningful to say that
> the implementation provides a way to access information about the
> simulated universe.

This is a famous problem in functionalist theories of mind, examined
by (to give a partial list) Hilary Putnam, John Searle, Greg Egan and
David Chalmers. For example, see this paper:

http://consc.net/papers/rock.html

My rationalization is that mind exists as an abstract Platonic object,
with computers and brains being concrete examples of mind in the way a
projectile moving under gravity is a concrete example of a parabola.
This reverses the usual supervenience relationship between the
physical and the mental. It's weird, but the alternative would seem to
be to drop functionalism and say that there is some fundamentally
non-computational process in the brain which generates consciousness.
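The quoted AxB toy universe makes the monitor question easy to state in code. This is a minimal sketch: the update rule is an invented stand-in (nothing from the original), and the point it illustrates is that attaching zero, one, or two "monitors" changes nothing about the sequence of computed states.

```python
def tick(grid):
    """One simulated tick of an AxB binary image under a toy 'law of
    physics': a cell is on next tick iff exactly one orthogonal
    neighbour is currently on. (Invented rule, for illustration only.)"""
    a, b = len(grid), len(grid[0])
    def on(i, j):
        return grid[i][j] if 0 <= i < a and 0 <= j < b else 0
    return [[1 if on(i - 1, j) + on(i + 1, j) + on(i, j - 1) + on(i, j + 1) == 1
             else 0
             for j in range(b)]
            for i in range(a)]

def run(state, ticks, monitors=()):
    """Advance the simulated universe; each 'monitor' is a callback
    that merely reads off the state. The computation is identical
    whether there are zero, one, or two monitors attached."""
    for _ in range(ticks):
        state = tick(state)
        for monitor in monitors:
            monitor(state)  # output only; does not affect the state
    return state

start = [[0, 1, 0],
         [0, 0, 0],
         [0, 0, 0]]
no_monitor  = run(start, 3)
two_monitors = run(start, 3, monitors=(print, print))
# no_monitor == two_monitors: the monitors are epiphenomenal
```

Whatever one says about which physical patterns count as implementations, the monitors here clearly contribute nothing to the simulated dynamics, which is the force of the "turn the monitor off" question above.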

> I'm mainly interested in this issue because I have doubts about
> uploads not being p-zombies. These hand-wavy theories of simulated
> experience are full of paradoxes. I agree that one can't in principle
> prove that a given observed entity has consciousness, but at least
> there should be a consistent theory of what consciousness is. In this
> case, I take a universe containing a conscious observer as a
> consciousness vessel, so that a genuine simulation corresponds to an
> implementation of consciousness.

Are you aware of this paper, which argues that *if* brain physics is
computable *then* a computer emulation of the brain will reproduce the
brain's consciousness as well as its external behaviour?

http://consc.net/papers/qualia.html

-- 
Stathis Papaioannou


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT