From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Dec 16 2002 - 23:10:54 MST
Gary Miller wrote:
> Will a FAI develop a sense of self-preservation and self-interest? It
> seems prudent from an evolutionary perspective to ensure an organism
> does not engage in risky behavior for no reason, thereby risking its
> very existence, such as radically altering its own code without doing
> a backup :)
This one has been pretty exhaustively covered; see:
http://intelligence.org/CFAI/
http://intelligence.org/CFAI/anthro.html
http://intelligence.org/CFAI/design/seed.html
http://intelligence.org/CFAI/info/indexfaq.html
http://intelligence.org/CFAI/info/indexfaq.html#q_2
http://intelligence.org/CFAI/info/indexfaq.html#q_2.3
http://intelligence.org/CFAI/info/indexfaq.html#q_2.13
http://google.com/search?q=subgoal+site:sl4.org
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence