Re: [sl4] Rolf's gambit revisited

From: Gwern Branwen (gwern0@gmail.com)
Date: Thu Jan 01 2009 - 11:26:00 MST



On Thu, Jan 1, 2009 at 12:39 PM, Matt Mahoney wrote:
> --- On Wed, 12/31/08, Norman Noman wrote:
>
>> (rolf's gambit is a method for cooperation between powers in different
>> worldlines via mutual simulation, gone over in detail in this earlier sl4
>> thread: http://www.sl4.org/archive/0708/16600.html)
>>
>> (if you haven't read that thread, you really should, it's probably the most
>> interesting thing to ever come out of this list)
>>
>> I was thinking today about a puzzle. Let's say you're a friendly AI, and
>> you're going to enact rolf's gambit. But before you do that, you take over a
>> few solar systems, and you discover obvious proof that your world is a
>> simulation. For the sake of argument, let's say it's an indestructible tuba
>> on the third moon of saturn.
>>
>> The question is this: assuming you continue with rolf's gambit, do you
>> include the tuba in your subsimulations? Why or why not?
>
> First, there is no such thing as indisputable proof. There is only belief. If an AI believes the universe is real or simulated, it is because it was programmed that way (on purpose or by accident). If the two cases are indistinguishable, then belief one way or the other should have no effect on rational behavior because it does not affect expected utility. In particular, there is no difference from the viewpoint of either AI or humanity between a real world AI wiping out real world humanity and a simulated AI wiping out simulated humanity.
>
> But let's say for the sake of argument that it does matter.
>
> An AI running in a simulation cannot know anything about the AI that is simulating it. Any Turing machine can simulate any other Turing machine. The simulated AI might believe otherwise, but only because the simulating AI biased it toward those beliefs.
....
> -- Matt Mahoney, matmahoney@yahoo.com

Perhaps this is nitpicking, but I disagree. Any Universal Turing
Machine can, by definition, simulate any Turing machine, but that is a
formalism which need not obtain in practice: nothing guarantees the
simulator is universal, or has unbounded resources. I believe one can
learn a fair bit about the simulator. Here's an example. Suppose I am
in a simulation, and I begin counting upwards: 1, 2, 3...

What have I learned when I reach 2? That I am not being simulated by
the simplest possible Turing machine, since the busy beaver number
(the maximum number of steps a halting machine of that size can take)
for a 1-state 2-symbol machine is 1. What have I learned when I reach
7? That I am not being simulated by anything as weak as a 2-state
2-symbol Turing machine, whose busy beaver number is 6. When I reach
22? The same for a 3-state 2-symbol machine, whose busy beaver number
is 21. And what have I learned when I reach 47,176,871? That I am not
being simulated by a 5-state 2-symbol Turing machine.
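The inference above can be checked directly for the small cases. Below is a toy sketch (added for illustration, not from the original post): a minimal Turing-machine simulator run on the standard 2-state 2-symbol busy-beaver champion, which halts after exactly 6 steps. If each number counted corresponds to at least one simulator step, then counting to 7 rules out every 2-state 2-symbol simulator.

```python
def run(machine, max_steps=10**7):
    """Simulate a 2-symbol Turing machine given as a dict
    {(state, symbol): (write, move, next_state)}, where 'H' halts.
    Returns (steps_taken, ones_on_tape)."""
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H' and steps < max_steps:
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write   # write the new symbol
        pos += move         # move the head left (-1) or right (+1)
        steps += 1
    return steps, sum(tape.values())

# The 2-state 2-symbol busy-beaver champion machine.
bb2 = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
}

steps, ones = run(bb2)
print(steps, ones)  # 6 4
```

So any halting 2-state 2-symbol machine stops within 6 steps (writing at most 4 ones), and an observer who counts past that bound has learned something real about whatever is simulating him.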

--
gwern



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT