Re: AGI Prototyping Project

From: Russell Wallace
Date: Tue Feb 22 2005 - 20:32:24 MST

(Got a failure message the first time, so I'm attempting a resend,
sorry if it shows up twice.)

On Wed, 23 Feb 2005 11:15:22 +1100, Tennessee Leeuwenburg wrote:
> I think there is something we disagree about. You seem to be worried
> that Friendliness will *evolve into* a paperclip machine (or
> whatever).

I think a _sufficiently well designed_ FAI can stop itself from doing
that. I think an _ad-hoc, sort-of Friendly_ entity does indeed present
that danger.

In other words, it is not enough that a transhuman entity (whether AI
or uploaded human) be non-malevolent right now; it needs a
Friendliness architecture that will remain stable under indefinitely
many iterations of copying and self-improvement (including hacking
attempts, cosmic rays flipping bits of memory, etc.).

> Well, I think designing in a reasonable sense of self-preservation of
> identity into the AGI and using a reproductive model will mostly take
> care of things. No individual intelligence that values its own
> existence is going to wilfully construct an offspring which will
> guarantee its own demise, unless perhaps it regards its own demise as
> inevitable, and can fulfil some other important goals by doing so. It
> is a short step from self-preservation to morality.

Suppose for the sake of argument that this ensures the second and
third generations will still be recognizably Friendly; how do you
ensure the same thing for the millionth generation? (Which may not be
all that long in coming, once nanotech assembler technology is
available.)

In case you think this is all hypothetical, someone said on this very
mailing list a while back that he himself, if given the chance to
upload, would then optimize himself for maximum reproductive
efficiency - discarding all human values, including sentience, in the
process.
- Russell

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT