From: Dimitry Volfson (dvolfson@juno.com)
Date: Mon Mar 17 2008 - 22:02:36 MDT
Matt Mahoney wrote:
> --- Mark Waser <mwaser@cox.net> wrote:
>
>
>>> Getting the re-implementation correct is an intelligence problem. If you
>>> still have any doubts, the nanobots will go into your brain and remove
>>> them. You will believe whatever you are programmed to believe. If you
>>> are opposed to being reprogrammed, the nanobots will move some more
>>> neurons around to change your mind about that too. It can't be evil if
>>> everyone is in favor of it.
>>>
>> Sorry. By my definition, if you alter my beliefs so as to subvert my
>> goals, you have performed an evil act.
>>
>
> That's your present perspective. From its perspective, it is bringing you
> closer to its level of intelligence so that you can see the folly of your
> ways.
>
>
> -- Matt Mahoney, matmahoney@yahoo.com
Absolutely. Even if all it does is talk to you, and that conversation
ends up changing your goals or the priority of your goals, then by that
definition it has "subverted your goals, therefore performing an evil
act." It follows that it must never communicate any information to
anyone, in any way, ever, unless that information has no effect on their
goals, which means it must stay silent forever.
But what I believe nobody yet understands is why you, Mark Waser,
believe that simply telling a (potential) superintelligence about a
belief system would change its beliefs. It's like the old science
fiction story where you kill the superintelligent computer by telling it
a riddle it can't solve. A neat idea for a story, but if you really
think about it, there's no reason for it to work.