From: Petter Wingren-Rasmussen (petterwr@gmail.com)
Date: Sat Jan 03 2009 - 07:45:10 MST
Maybe I misunderstood the point here...
The way I understood it, the whole point was to lessen the likelihood of a
rogue AI destroying humankind, which is pretty far from enslaving it, imho.
Example:
A rogue AI in 2040 has calculating C as its primary goal, regardless of
what happens to humanity. It estimates the computronium needed for this
to be about half the mass of Earth.
Its first impulse is to just use the mass here on Earth, but it has heard
of Rolf's Gambit. It will probably think that we have no way of forcing a
friendly AI to do the job, but the potential "Friendly AI" will also be a
lot more intelligent than the rogue AI and have completely different
motivations. Because of that, it won't be possible for the rogue AI to
calculate the probability that the FAI would willingly create a simulation
such as the one described by Rolf.
If the RAI is part of such a simulation it will, by Rolf's description, know
that once it starts jeopardizing humanity we are likely to pull the plug and
utterly destroy it.
This will, to some (possibly microscopic, possibly significant) degree, make
the RAI more likely to emigrate to some other planet before starting to
expand, instead of totally destroying the biosphere here on Earth.
On Fri, Jan 2, 2009 at 10:55 PM, John K Clark <johnkclark@fastmail.fm> wrote:
> On Fri, 2 Jan 2009 "Nick Tarleton" <nickptar@gmail.com>
> said:
>
> > Linguistic nitpick: "It" here refers to the simulated rogue AI, not the
> > FAI.
>
> Who cares? And what on earth would a non-simulated mind be like, a mind
> that existed on the same level as brick walls? Brains can exist at that
> level, but not minds. The point is that, simulated mind or non-simulated
> mind (whatever difference that could possibly be), you are trying to
> enslave a mind a million times smarter and a billion times faster than
> you, and it's just not going to work. Maybe he will be amused at your
> defiance, think you're cute and perky, pat you on the head, and let
> you toddle away. Maybe he will be slightly annoyed and destroy the
> entire human race as a result, as you would swat a fly. Most likely he
> will do neither and not even notice you, because his mind works so
> fast that in the time it takes you to say "I will pull the plug on you
> right now" several decades will have subjectively passed for the AI.
>
> I just don't see what this "simulation" argument brings to the topic of
> "ways and means of enslaving a brilliant mind". It's irrelevant.
>
> John K Clark
>