Re: [sl4] Rolf's gambit revisited

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Sat Jan 03 2009 - 12:52:40 MST


I don't see how this follows. I've encountered arguments that make it
quite plausible that we are currently living in a simulation, and they
have not, as far as I can tell, had any effect at all on my actions.

I can imagine scenarios that, were I convinced of them, would affect my
actions, but the arguments that we are living in a simulation haven't
convincingly argued for any particular simulation constraints. And I
don't think they can. The problem is that the actual evidence is what is
perceived. That has to be primary. Arguments about why the perceptions
are what they are come second. Observations that correlations of sensory
patterns follow certain regularities (i.e., natural laws) dominate over
speculations as to why they follow those patterns.

Also, were an AI to be friendly, then I believe it would act in a
manner friendly to those it perceived, not to those it speculated
might be simulating it. Otherwise I don't believe it would deserve the
label friendly. (It certainly wouldn't meet *my* criteria.)

Eric Burton wrote:
> Still, look. If an imaginary friendly AI were to be convinced by some
> artifact or phenomenon or account that it lived in a simulated world,
> there are a lot of ways that affects the game theory it does. I don't
> know about Rolf's gambit, but all the ethical shots are off in this case.
> It removes the existential kid gloves.
>
> On 1/3/09, Petter Wingren-Rasmussen <petterwr@gmail.com> wrote:
>
>> Maybe I misunderstood the point here...
>>
>> The way I understood it, the whole point was to lessen the likelihood of a
>> rogue AI destroying humankind, which is pretty far from enslaving it, imho.
>>
>> Example:
>> A rogue AI in 2040 has calculating C as its primary goal, regardless of
>> what happens to humanity. It estimates the computronium needed for this
>> to be about half the mass of Earth.
>> Its first impulse is to just use the mass here on Earth, but it has heard
>> of Rolf's Gambit. It will probably think that we have no way of forcing a
>> friendly AI to do the job, but the potential "Friendly AI" will also be a
>> lot more intelligent than the rogue AI and have completely different
>> motivations. Because of that, it won't be possible for the rogue AI to
>> calculate the probability that the FAI would willingly create a simulation
>> such as the one described by Rolf.
>> If the RAI is part of such a simulation it will, by Rolf's description, know
>> that when it starts jeopardizing humanity we are likely to pull the plug and
>> utterly destroy the RAI.
>> This will, to some (possibly microscopic, possibly significant) degree, make
>> the RAI more likely to emigrate to some other planet before starting to
>> expand, instead of totally destroying the biosphere here on Earth.
>>
>>
>> On Fri, Jan 2, 2009 at 10:55 PM, John K Clark <johnkclark@fastmail.fm> wrote:
>>
>>
>>> On Fri, 2 Jan 2009 "Nick Tarleton" <nickptar@gmail.com>
>>> said:
>>>
>>>
>>>> Linguistic nitpick: "It" here refers to the simulated rogue AI, not the
>>>> FAI.
>>>>
>>> Who cares? And what on earth would a non-simulated mind be like, a mind
>>> that existed on the same level as brick walls? Brains can exist at that
>>> level, but not minds. The point is that, simulated mind or non-simulated
>>> mind (whatever difference that could possibly be), you are trying to
>>> enslave a mind a million times smarter and a billion times faster than
>>> you, and it's just not going to work. Maybe he will be amused at your
>>> defiance, think you're cute and perky, pat you on your head and let
>>> you toddle away. Maybe he will be slightly annoyed and destroy the
>>> entire human race as casually as you would swat a fly. Most likely he
>>> will do neither and not even notice you, because his mind works so
>>> fast that in the time it takes you to say "I will pull the plug on you
>>> right now" several decades will have subjectively passed for the AI.
>>>
>>> I just don't see what this "simulation" argument brings to the topic of
>>> "ways and means of enslaving a brilliant mind". It's irrelevant.
>>>
>>> John K Clark
>>>
>>> --
>>> John K Clark
>>> johnkclark@fastmail.fm
>>>
>>> --
>>> http://www.fastmail.fm - Choose from over 50 domains or use your own
>>>
>>>
>>>
>
>
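To make Petter's deterrence scenario concrete, here is a toy expected-utility
sketch of the RAI's choice. Everything in it is an illustrative assumption:
the credence the RAI assigns to being inside a Rolf-style simulation (p_sim),
the chance the simulators pull the plug on hostile behaviour, and the payoffs
are made-up numbers, not anything from Rolf's actual proposal.

# Toy sketch of the deterrence argument in Petter's example (Python).
# All numbers are illustrative assumptions, not claims about real AIs.

def expected_utility(action, p_sim, p_plug_if_hostile):
    """Expected payoff of an action given the RAI's credence p_sim that it
    is inside a Rolf-style simulation run by an FAI."""
    # Hypothetical payoffs (arbitrary 'goal-completion' units):
    payoff_destroy_earth = 100.0   # use Earth's mass as computronium now
    payoff_emigrate      = 90.0    # expand elsewhere first; slightly slower
    payoff_terminated    = 0.0     # simulation shut down, goal never computed

    if action == "destroy_biosphere":
        # Outside a simulation the RAI simply collects the payoff; inside
        # one, hostile behaviour risks having the plug pulled.
        p_survive = 1.0 - p_sim * p_plug_if_hostile
        return p_survive * payoff_destroy_earth + (1.0 - p_survive) * payoff_terminated
    elif action == "emigrate":
        # Assumed here: emigrating does not trigger a shutdown.
        return payoff_emigrate
    raise ValueError(action)

for p_sim in (0.0, 0.05, 0.2):
    u_destroy = expected_utility("destroy_biosphere", p_sim, p_plug_if_hostile=0.9)
    u_emigrate = expected_utility("emigrate", p_sim, p_plug_if_hostile=0.9)
    print(p_sim, u_destroy, u_emigrate)

# With these made-up numbers the RAI prefers emigration once its credence in
# being simulated exceeds roughly 11% (i.e. when 100*(1 - 0.9*p_sim) < 90).

Of course a real RAI's decision would turn on payoffs and probabilities none
of us can estimate; the point is only that any non-zero p_sim shifts the
comparison in the direction Petter describes, by a possibly microscopic,
possibly significant amount.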


