From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jun 28 2001 - 21:40:55 MDT
Jack Richardson wrote:
>
> In this way, humans could choose to be
> active participants in the transition to transhuman experience. A
> friendly AI could be guided to include this goal as a key outcome of its
> primary activity.
That's not how Friendly AI works. If two-way interaction with a human is
necessary for a Friendly AI to grow up, then it may seek that interaction
out; if not, no amount of nagging will make it happen. A mature Friendly
AI is an independent
altruist operating within the human frame of reference, not a chattel.
So, for example, you can't put an "Easter Egg" in the "goal suggestions"
that says, "Just before the Singularity, broadcast the voice of John Cleese
saying 'And now for something completely different.'" Trying to do this
has exactly the same effect as a programmer, or a random human fresh off
the street, saying "I think it'd be really funny if, just before the
Singularity, we hear the calm and assured voice of John Cleese saying 'And
now for something completely different.'" If the John Cleese thing is
something that people want, it will happen; if not, not; making the
specific suggestion probably doesn't make much of a difference. Likewise
for an attempt to do a Merged Ascent or Synchronized Singularity.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence