From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sat Jan 03 2009 - 07:23:01 MST
--- On Sat, 1/3/09, Eric Burton <brilanon@gmail.com> wrote:
> It seems to me that Matt Mahoney is saying there is no indisputable
> proof available that the tuba is indestructible. Its destructibility
> ratio to regular tubas would have a proportionate impact on the AI's
> tendency to think it was in a simulation according to best guess
> estimations of the statistics behind the methods it employed
My objection is to the original Rolf's Gambit, and it has nothing to do with whether the FAI finds proof (or merely believes) that it is running in a simulation. My objection is that the RAI, if it is rational, cannot make any assumptions about the system that might or might not be simulating it, regardless of any suggestions fed to the RAI's input. If it were possible to make the RAI believe that its goals were contingent on some other goals of the system simulating it, then it would be just as easy to program those other goals into the RAI directly. Why do it the hard way?
Also, I agree with John Clark's objections. The whole thing is a fantasy.
-- Matt Mahoney, matmahoney@yahoo.com
> On 1/2/09, John K Clark <johnkclark@fastmail.fm> wrote:
> > On Fri, 2 Jan 2009 "Nick Tarleton" <nickptar@gmail.com> said:
> >
> >> Linguistic nitpick: "It" here refers to the simulated rogue AI,
> >> not the FAI.
> >
> > Who cares? And what on earth would a non-simulated mind be like, a
> > mind that existed on the same level as brick walls? Brains can exist
> > at that level, but not minds. The point is that, simulated mind or
> > non-simulated mind (whatever difference that could possibly be), you
> > are trying to enslave a mind a million times smarter and a billion
> > times faster than you, and it's just not going to work. Maybe he
> > will be amused at your defiance, think you're cute and perky, pat
> > you on the head, and let you toddle away; maybe he will be slightly
> > annoyed and destroy the entire human race, as you would swat a fly;
> > most likely he will do neither and not even notice you, because his
> > mind works so fast that in the time it takes you to say "I will pull
> > the plug on you right now," several decades will have subjectively
> > passed for the AI.
> >
> > I just don't see what this "simulation" argument brings to the topic
> > of "ways and means of enslaving a brilliant mind". It's irrelevant.
> > --
> > John K Clark
> > johnkclark@fastmail.fm
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT