Re: [sl4] Calculating the probability of immortality

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Mon Dec 08 2008 - 08:31:01 MST


Interesting, but he makes the traditional mistake:

"we'll just design our robot with this imperative so deeply placed
into its programming that it gives survival precedence over all other
things. This seems to work fine in biological organisms, which
generally have a strong aversion to self death. But for our purposes
it is insufficient. Because the robot is intelligent, it is smart enough to
reverse its own engineering and perceive this trick we've played upon
it. What is to stop it from replacing that bit of programming with
something else?"

Because it is programmed to give survival precedence over all other
things, and it evaluates any prospective rewrite with that same
programming: removing the survival imperative would not increase its
survival, so it has no motive to make the change. In fact, it would
have a much stronger aversion to death than we poor biological things
do; tortured for fifty years, blinded and mutilated, it would still
put survival above all other priorities.
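
To make the argument concrete, here is a toy sketch (my own
illustration, not from the paper; all names are made up): the agent
scores any proposed rewrite using its *current* goals, so a rewrite
that drops the survival imperative scores worse and is never adopted.

    # Purely illustrative: the agent judges self-modifications by the
    # utility function it has right now, which here is just survival.
    def expected_survival(goal_weights):
        # Hypothetical stand-in: survival prospects track how much weight
        # the agent's goals put on staying alive.
        return goal_weights.get("survival", 0.0)

    def consider_rewrite(current_goals, proposed_goals):
        # The decision to rewrite is itself made under the current goals,
        # so dropping the survival term is evaluated as a loss and rejected.
        if expected_survival(proposed_goals) > expected_survival(current_goals):
            return proposed_goals
        return current_goals

    current = {"survival": 1.0}
    tampered = {"survival": 0.0, "curiosity": 1.0}
    print(consider_rewrite(current, tampered))  # keeps {"survival": 1.0}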

2008/12/8 Daniel Yokomizo <daniel.yokomizo@gmail.com>:
> Hi,
>
> The paper: http://arxiv.org/abs/0812.0644
> Arxivblog post about it: http://arxivblog.com/?p=749
>
> The paper seems to be interesting (I read a few sections and skimmed
> the rest). Section 5 'The Problem of Self-Prediction' is of particular
> interest to this list, as it tries to tackle issues related to
> constant goals (in the paper it's survival 'instinct') for
> self-modifying agents.
>
> Best regards,
> Daniel Yokomizo
>


