Re: Self-modifying FAI (was: How hard a Singularity?)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 26 2002 - 05:33:51 MDT


Eugen Leitl wrote:
> On Wed, 26 Jun 2002, Stephen Reed wrote:
>
>>I understand from CFAI that one grounds the concept of Friendliness in
>>external referents - that the Seed AI attempts to model with
>>increasing fidelity. So the evolving Seed AI becomes more friendly as
>>it reads more, experiments more and discovers more about what
>>friendliness actually is. For Cyc, friendliness would not be an
>>implementation term (e.g. some piece of code that can be replaced),
>>but be a rich symbolic representation of something in the real world
>>to be sensed directly or indirectly.
>
> Due to extreme power difference a SI can't help coupling to the ad hoc
> consensus ethics encoded in external referents, warping it over the course
> of a few iterations. If it eventually detaches from the referents'
> floating database the referents' morality will drift away from the
> now-fixed SI morality metric.

"Coupling"? "Warping"? I can't even figure out what you think is going to
happen, let alone why you think it ought to happen.

And how do "referents" have a "floating database" or a "morality"? Are you
confusing the idea of an external referent with the programmers? A
programmer can be an external referent, but so can any other physical object
or any question with a real-world correct answer; referents are not programmers.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT