Re: Self-modifying FAI (was: How hard a Singularity?)

From: Eugen Leitl
Date: Wed Jun 26 2002 - 06:12:44 MDT

On Wed, 26 Jun 2002, Eliezer S. Yudkowsky wrote:

> > Due to extreme power difference a SI can't help coupling to the ad hoc
> > consensus ethics encoded in external referents, warping it over the course
> > of a few iterations. If it eventually detaches from the referents'
> > floating database the referents' morality will drift away from the
> > now-fixed SI morality metric.
> "Coupling"? "Warping"? I can't even figure out what you think is going to

Coupling, as in interacting. A runaway Singularity machine operating down
here is not a hummingbird. I understand the whole point of building an
ethics-aware Power is not that it tiptoes out of everybody's way. Or else
why build it?

Warping, as in changing the course of evolution. I've described why this
must occur regardless of whether moral evaluation is completely
externalized or completely internal. The Power asserts that the delta
between the world's metric and its own is small at each iteration, but
this doesn't constrain long-term drift. Since it's not a passive player,
it drives the drift. Long-term results are undecidable.

> happen, let alone why you think it ought to happen.

Are you postulating a special knowledge or ability on your part to know
what is going to happen? No one has been there.

> And how do "referents'" have a "floating database" or a "morality"? Are you
> confusing the idea of an external referent with the programmers? A
> confusing the idea of an external referent with the programmers? A

No. I think I understood the gist of it just fine.

> programmer can be an external referent, but so can any other physical object
> or any question with a real-world correct answer; referents are not programmers.

What I meant by the above is that it doesn't matter. An iron fist in a
velvet glove is still an iron fist. All the problems are intrinsic to the
power gradient, which the system must maintain to keep ahead of subject

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT