Re: [sl4] Rolf's gambit revisited

From: Norman Noman (overturnedchair@gmail.com)
Date: Mon Jan 12 2009 - 02:12:00 MST


On Tue, Jan 6, 2009 at 9:47 AM, Benja Fallenstein <
benja.fallenstein@gmail.com> wrote:

> Hi all,
>
> (New here. *waves*)
>
> Peter de Blanc wrote:
> > Mike Dougherty wrote:
> >> Peter de Blanc wrote:
> >>> Matt Mahoney wrote:
> >>>
> >>>> False. If X simulates Y, then K(X) > K(Y) because X has an exact model
> >>>> of the mental state of Y. This implies that Y cannot also simulate X
> >>>> because it would require K(Y) > K(X).
> >>>
> >>> Matt, please stop posting pseudomathematics.
> >>
> >> Seriously? [...]
> >
> > My objection is to the statement "If X simulates Y, then K(X) > K(Y)."
> > There's no such theorem. For example, you could write a program which
> > simulates every possible program. This program would have some fixed
> > complexity K, but since it simulates every program, it will simulate some
> > with complexity >K.
>
> Matt was replying to a scenario where the two AIs know each other's
> source code, and claiming, as I understand, that they cannot possibly
> conduct a dialog by simulating each other. IMHO that makes it clear
> that his statement meant something more specific than you're making it
> out to be.
>
> Matt, as far as I can see you're wrong, though; K(X) = K(Y), no
> contradiction. Yes, if X "did something besides simulating Y," in a
> certain sense, then you would have K(X) > K(Y). But by assumption,
> each AI simulates (eventually) everything the other AI does, so in
> that sense, Y "does everything" that X does and vice versa. To see
> that you can have Xs and Ys with such source codes: Let's say that you
> have X' that takes the source of Y as input and behaves like you want
> X to behave, and, symmetrically, you have Y' that takes the source of
> X as input. Then it seems to me it's basically just an exercise in
> writing quines to obtain X (using both X' and Y') and to obtain Y
> (again using both X' and Y'), with K(X) = K(Y).
>
> I don't want to try my hand at doing this with Turing machines, but
> let's try it with computable functions.
>
> I'll assume you know how to encode a tuple of natural numbers as a
> natural number. Let eval : N -> (N -> N) be a Gödel numbering of the
> computable functions. Let an *agent* be a computable function whose
> input is interpreted to be (some information received from the
> environment at the first time step), and whose output is a pair of (an
> action to take in the environment at the first time step) and (the
> Gödel number, relative to 'eval', of the agent to use at the *next*
> time-step).
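>
> (As an aside, a minimal Haskell-style sketch of this type, using direct
> recursion instead of Gödel numbers; the names 'Agent' and 'stepAgent'
> are illustrative only and not part of the formalism:
>
>   -- an agent maps this step's input to (this step's action,
>   -- the agent to use at the next time step)
>   newtype Agent i a = Agent { stepAgent :: i -> (a, Agent i a) }
>
> The Gödel-numbered version replaces the embedded 'Agent i a' by a
> number that gets fed back through 'eval'.)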
>
> Let an *environment* be a computable function whose input is some
> agent's action and whose output is a pair of the information to send
> to the agent in the next time step, and the environment to use in the
> next time step. (Incidentally, note that the definitions of agent and
> environment are precisely parallel, but we won't be using that.) It is
> a simple exercise in coding to write a function 'interact' that, given
>
> * the Gödel number of an agent,
> * the input to give to the agent at the first time step, and
> * the Gödel number of an environment
>
> computes the infinite sequence of inputs from the environment and
> actions from the agent:
>
> interact(agent, inp, env, 0) = (inp, act)
>   where (act, agent') = eval(agent)(inp)
> interact(agent, inp, env, n+1) = interact(agent', inp', env', n)
>   where (act, agent') = eval(agent)(inp); (inp', env') = eval(env)(act)
>
> The last parameter of 'interact' is the index of the infinite sequence.
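>
> (Continuing the illustrative Haskell sketch, with an 'Env' type dual to
> 'Agent' and 'interactA' unfolding the infinite stream of (input, action)
> pairs; again just an illustration, not part of the formalism:
>
>   newtype Env a i = Env { stepEnv :: a -> (i, Env a i) }
>
>   interactA :: Agent i a -> i -> Env a i -> [(i, a)]
>   interactA (Agent f) inp (Env g) = (inp, act) : interactA agent' inp' env'
>     where (act, agent') = f inp
>           (inp', env')  = g act
>
> Indexing this list at n plays the role of interact(agent, inp, env, n).)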
>
> Let's define a formalism for two agents in different environments
> which can pass messages to each other, and then use the trick with
> quines to transform them into single agents simulating each other to
> conduct a dialog roughly in the sense of Rolf's gambit.
>
> The idea behind the formalism is that at each time step, each agent
> outputs a message to send to the other agent as part of its action in
> the environment, and at the next time step, the other agent receives
> this message as part of its input. So the formalism is simply that the
> input from the environment becomes a pair (message from other agent,
> rest of the input from the environment) and the action becomes a pair
> (message to other agent, rest of the action in the environment).
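>
> (In the sketch, this just specializes the Agent type; 'MsgAgent' is an
> illustrative name only:
>
>   -- input  = (message from the other agent, input from the environment)
>   -- action = (message to the other agent, action in the environment)
>   type MsgAgent m i a = Agent (m, i) (m, a)
>
> and the environments are unchanged.)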
>
> It is then a simple exercise in coding to write a function
> 'interactMsg' that, given
>
> * the Gödel numbers of two interacting agents,
> * the inputs to give to the agents on their first time steps,
> * the messages to pass to the agents on their first time steps
> (usually just 0), and
> * the environments of the two agents (in the original sense of environment)
>
> computes the infinite sequence of, for each time step, the two inputs
> from the environments, the two messages sent by the agents, and the
> two actions taken in their respective environments:
>
> interactMsg(ag1, ag2, i1, i2, m1, m2, e1, e2, 0) = (i1,i2,m1,m2,a1,a2)
> where
> ((m1',a1),ag1') = eval(ag1)(m1,i1)
> ((m2',a2),ag2') = eval(ag2)(m2,i2)
>
> interactMsg(ag1, ag2, i1, i2, m1, m2, e1, e2, n+1)
> = interactMsg(ag1', ag2', i1', i2', m2', m1', e1', e2', n) where
> ((m1',a1),ag1') = eval(ag1)(m1,i1); (i1',e1') = eval(e1)(a1)
> ((m2',a2),ag2') = eval(ag2)(m2,i2); (i2',e2') = eval(e2)(a2)
>
> Again, the last parameter is the time step.
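>
> (The corresponding sketch, with the messages crossed over at each step
> so that what one agent sends is what the other receives next; as
> before, the names are illustrative only:
>
>   interactMsgA :: MsgAgent m i a -> MsgAgent m j b -> i -> j -> m -> m
>                -> Env a i -> Env b j -> [((i, j), (m, m), (a, b))]
>   interactMsgA (Agent f1) (Agent f2) i1 i2 m1 m2 (Env g1) (Env g2) =
>       ((i1, i2), (m1, m2), (a1, a2))
>         : interactMsgA ag1' ag2' i1' i2' m2' m1' e1' e2'
>     where ((m1', a1), ag1') = f1 (m1, i1)
>           ((m2', a2), ag2') = f2 (m2, i2)
>           (i1', e1')        = g1 a1
>           (i2', e2')        = g2 a2
>
> so interactMsg(..., n) corresponds to the n-th entry of this list.)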
>
> Let us say that ag1,ag2 are such message-passing agents, i1,i2 are
> inputs, and e1,e2 are environments. The point of this message is that
> we can construct an agent ag1* such that for all n,
>
> interact(ag1*, i1, e1, n) = (i1', a1') where
> (i1',i2',m1',m2',a1',a2') = interactMsg(ag1,ag2,i1,i2,0,0,e1,e2,n)
>
> Since everything is symmetric, this means that we can also construct a
> similar agent ag2*, and we have ag1* and ag2* in their alternative
> possible worlds "conducting a dialog by simulating each other." Now,
> ag1, even when running as ag1*, doesn't have power over ag2's
> environment, so this doesn't quite suffice for Rolf's gambit, but I
> can't be bothered to extend it more right now; in any case, if we
> informally specify that ag2 cares about converting the *real*
> environment to cheesecake (not pie!), and that it has a probability
> distribution over whether ag1* or ag2* is the actual real situation,
> then it seems to me that we do have a working variant of the gambit.
>
> Onwards to the construction of ag1*. Let e be the Gödel number of a
> function from a pair of numbers to a natural number; define apply(e,n)
> to be the Gödel number of
>
> f(n') = eval(e)(n,n')
>
> That is, apply(e,n) is a partial application operator:
>
> eval(apply(e,n))(n') = eval(e)(n,n')
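>
> (In the Gödel-number setting this 'apply' is just an instance of the
> s-m-n theorem. In the direct Haskell sketch there is nothing to
> construct, since partial application is built in; a one-liner showing
> the equation it must satisfy, with 'applyF' an illustrative name:
>
>   applyF :: ((x, y) -> r) -> x -> (y -> r)
>   applyF f x = \y -> f (x, y)
>
> so that applyF f x y = f (x, y).)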
>
> Now, let
>
> Q(q, ((ag1,ag2,i2,m1,m2,e2), i1)) = (a1,ag1*') where
> ((m1',a1),ag1') = eval(ag1)(m1,i1)
> ((m2',a2),ag2') = eval(ag2)(m2,i2)
> (i2',e2') = eval(e2)(a2)
> ag1*' = apply(q, (ag1',ag2',i2',m2',m1',e2'))
>
> By a corollary to Kleene's recursion theorem
>
>
> http://en.wikipedia.org/wiki/Kleene%27s_recursion_theorem#Application_to_Quines
>
> there is a particular number q such that
>
> eval(q)(x) = Q(q,x)
>
> Then, let
>
> ag1* = apply(q, (ag1,ag2,i2,0,0,e2))
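>
> (In the direct Haskell sketch the appeal to the recursion theorem
> disappears: the knot that q ties can be tied by ordinary recursion.
> A sketch, reusing the illustrative types above; 'mkAg1Star' is an
> illustrative name only:
>
>   mkAg1Star :: MsgAgent m i a -> MsgAgent m j b -> j -> m -> m
>             -> Env b j -> Agent i a
>   mkAg1Star ag1 ag2 i2 m1 m2 e2 = Agent $ \i1 ->
>       let ((m1', a1), ag1') = stepAgent ag1 (m1, i1)
>           ((m2', a2), ag2') = stepAgent ag2 (m2, i2)
>           (i2', e2')        = stepEnv e2 a2
>       in (a1, mkAg1Star ag1' ag2' i2' m2' m1' e2')
>
> and ag1* corresponds to mkAg1Star ag1 ag2 i2 0 0 e2 for the given
> starting data. The Gödel-number version needs the recursion theorem
> only because the "next agent" has to be handed around as a number
> rather than as a closure.)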
>
> We now use a boring induction to show the desired result, i.e. that
> with these definitions,
>
> interact(ag1*, i1, e1, n) = (i1', a1') where
> (i1',i2',m1',m2',a1',a2') = interactMsg(ag1,ag2,i1,i2,0,0,e1,e2,n)
>

Good lord, it doesn't have to be nearly this complicated. Here are some
easier solutions:

AI(pi) at time step 100000 is simulating AI(pie) at time step 99000, who is
simulating AI(pi) at time step 98000, etc. As time advances and they eat
more planets they grow, and so they have no trouble simulating earlier,
smaller versions of each other and themselves.

OR

They simulate each other not just with a time lag, but actually slower than
real time. In this case, the simulations can be carried on forever without
eating significantly into the AI's processing capacity.
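
(Roughly: if each nested simulation runs at, say, a tenth of the speed of
the level above it, the whole infinite tower costs 1/10 + 1/100 + 1/1000 +
... = 1/9 of the top level's capacity, a geometric series that stays
bounded.)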

OR

They simply cut some corners! One simulated universe is full of pie; the
other is full of computronium sifting pointlessly through endless random
digits. In both cases, you can simply not simulate that part, say you
did, and spare yourself a lot of needless work.

This is what annoys me about technical explanations...


