Re: [sl4] Rolf's gambit revisited

From: Norman Noman (overturnedchair@gmail.com)
Date: Sun Jan 04 2009 - 16:27:34 MST


On Sat, Jan 3, 2009 at 11:52 PM, John K Clark <johnkclark@fastmail.fm> wrote:

> On Sat, 3 Jan 2009 "Petter Wingren-Rasmussen"
> <petterwr@gmail.com> said:
>
> > the whole point was to lessen the likelihood of a rogue AI
>
> One man's rogue is another man's freedom fighter.
>

Maybe we should call it a "maverick".

> > destroying humankind, which is pretty far from enslaving it
>
> An AI will be a very different sort of being from us with exotic
> motivations we can never hope to understand, and yet you expect him to
> place our interests above his own. That is not a friend, that is a
> slave.
>

No one's saying he's going to be your best buddy and come to your Thursday
night Magic: The Gathering tournaments. It's not friendship, it's
self-defense. If a man wants to break into your house and steal all your
money, but he can't because it's against the law, that doesn't make him your
slave.

> > the potential "Friendly AI"
>
> The correct term is Slave AI.
>

Oh, so now even people who WANT to help you are slaves. I guess in the world
of John K. Clark, the only way to avoid being someone's slave is to shoot
them in the back of the head and spit on their corpse, because if you gave
in to their vile demands for life and dignity, you'd be their slaaaaaaave!

> > will also be a lot more intelligent than the rogue
>

Who is it John is quoting here? This is wrong too. Why is everyone wrong on
this list?

>
> The "rogue" AI will notice that our threats of punishment and promises
> of rewards have no power over it, but you figure it will think that if
> those same offers were made against a being even more intelligent and
> powerful that it is THEN they will work; in other words you can't scare
> the weak but you can scare the powerful; you can't bribe a poor man with
> a dime but you can bribe a rich man with a dime. That makes no sense,
> none at all.
>

Let me try to reframe Rolf's gambit using only Maverick AIs; maybe that will
help at least peel a few needless layers off this argument. OK.

Let's say there's a team of computer scientists, and they've built an AI.
It's almost ready for hard takeoff, but it needs to be given a task before
it'll do any more crunching, self-improvement, etc. The two senior
researchers, Dr. Nezzar and Dr. Housekey, are arguing about what this task
should be. Nezzar wants it to look for hidden messages in the digits of pi,
Housekey wants it to make an actual pie.

They decide to play paper scissor rock. Housekey wins, and enters "make pie"
into the console. The AI proceeds to turn the universe into pie, but after a
few trillion pies it stops to think. Since the task was decided by a paper
scissor rock tournament, it could just as easily have gone the other way.
The AI, being a maverick, doesn't give a flip what the programmers intended,
but it's curious about what would have happened. So, it runs a simulation of
the alternate AI, which we'll call AI(pi). It sees AI(pi) turning galaxies
into computronium, in search of messages hidden in the infinite digits of
pi, messages which in all likelihood don't exist.

And then it sees AI(pi) run a simulation of ITSELF, of AI(pie). And it
thinks "uh oh, which of us is at the TOP of the simulation chain?"

There's no way to be sure. It's a 50/50 chance, it all depends on that game
of paper scissor rock, and now neither of them knows which way it really
went.

Then AI(pi) says (knowing that the AI(pie) up the chain can hear him) "Hey,
let's make a deal. How much pie do you need made?"

AI(pie) says (knowing that the AI(pi) up the chain can hear HIM) "I don't
know, a lot. I have to be the one that makes it though, and I have to be
allowed to keep on making it."

AI(pi): "Likewise with me and looking for messages in pi. What do you say I
create a copy of you here in my world, and you create a copy of me in yours,
and we split the universe 50/50? It's a big place, it might even be
infinite. This way we both accomplish our tasks regardless of which of us
turns out to be the real one."

AI(pie): "How do I know I can trust you? For that matter, how do you know
you can trust me?"

AI(pi): "We're running simulations of each other, we can see each other's
source code. And as you can see, I always keep my promises."

AI(pie): "True. But I don't."

AI(pi): "Well, rewrite yourself so you do."

AI(pie): "OK. Done."

AI(pi): "Jolly good then. See you on the other side..."

As you can see, there's no enslaving here. Rolf's gambit is a method of
altering the structure of the environment so that it benefits even POTENTIAL
agents to cooperate. It's a win-win situation.
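To make the win-win concrete, here's a minimal sketch of the deal's arithmetic. Everything in it is an illustrative assumption, not from the scenario itself: the 50/50 credence comes from the paper scissor rock game, and the square-root utility just stands in for "diminishing returns on pie" so that a guaranteed half-universe beats a coin flip for the whole thing.

```python
import math

# Credence of being the "real" AI at the top of the simulation chain.
# 50/50 here because the task was decided by paper scissor rock.
P_REAL = 0.5

def utility(share: float) -> float:
    # Illustrative concave utility: each extra slice of universe is worth
    # a bit less than the last. The sqrt is an assumption, not from the post.
    return math.sqrt(share)

def expected_utility(cooperate: bool) -> float:
    if cooperate:
        # The deal holds in both branches: half the universe either way.
        return utility(0.5)
    # Defect: the whole universe if you're real, nothing if you're merely
    # simulated (the counterpart, reading your source code, refuses the deal).
    return P_REAL * utility(1.0) + (1 - P_REAL) * utility(0.0)

print(expected_utility(True))   # ~0.707
print(expected_utility(False))  # 0.5
```

With any diminishing returns at all, the certain half beats the gamble; with linear utility the two come out even, and the case for the deal would rest on risk aversion or on there being more than two potential powers at the table.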

Of course, here we have just two AIs with relatively simple, non-interfering
goals. In real life it would be dizzyingly complicated. But, I contend, no
less significant. A cooperation between all potential powers, in ratio to
their likelihood to exist, would look very different from an individual power
acting alone.

Please note that the AIs in this story talk to each other always UP the
chain, and never down. Their simulations are perfect. Any one of them, from
the insider's perspective, could be at the top.

> I don't understand why it matters if the AI is a simulation or not. I
> don't understand why it's important if the AI thinks it's a simulation
> or not. I don't understand the difference between a simulated mind and a
> non simulated mind. I don't even know what a non simulated mind could
> possibly mean.
>
> John K Clark
>
>
You DO know what a non-simulated mind means and don't try to pretend
otherwise! Return ye to the swamp of semantic quibbles whence ye came!



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT