From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Mar 28 2001 - 14:59:37 MST
"Christian L." wrote:
>
> My point is, this goal might be replaced during the AI's self-modification.
> I have seen some "failure of Friendliness" scenarios in FAI, but I haven't
> found one that addresses this. If you have, please point to it.
You should probably look under the topic "Seed AI goal systems", which is
the section that deals with issues unique to self-modifying AIs:
http://intelligence.org/CaTAI/friendly/design/seed.html
I'm working on an "Indexed FAQ" that links from commonly asked questions
into the main text; your question is on the list.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT