From: Eliezer Yudkowsky (firstname.lastname@example.org)
Date: Mon May 24 2004 - 14:13:20 MDT
> On Sun, 23 May 2004 "Eliezer Yudkowsky" <email@example.com> said:
>>I think it might literally take considerably more caution to
>>tweak yourself than it would take to build a Friendly AI
> Why wouldn’t your seed AI run into the same problem when it tries to
> improve itself?
Because I would have designed that mind to handle those problems, in
exactly the way that natural selection did *not* design human beings to
handle those problems. Self-modification prefers a mind designed to handle
self-modification, just as swimming in the core of the sun prefers a body
designed to swim in the core of the sun.
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence