From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Feb 07 2001 - 21:09:16 MST
Gordon Worley wrote:
>
> An assumption that many seemed to be making in the recent discussion
> was that there will inevitably be evil AIs, thus the need for
> Friendliness and a sysop (or not, depending on whom you ask) to make
> sure that the AIs don't do anything evil. I'm wondering if they
> would ever become evil in the first place?
It's a free Universe. Everyone has the right to design evil AIs, unless
that is found to constitute child abuse or something. Certainly people
will have the right to become extremely powerful and intelligent and,
unless objective morality steps in, extremely evil, so yes, there may well
be ultrapowered incredibly evil entities in the Universe, and, thanks to
the Sysop paradigm, Amishfolk on Earth going *thbbpt* at them.
*Unless* altruism is a convergent subgoal powerful enough to suck in any
sufficiently intelligent entity, regardless of how programmed, which is a
pleasant and significant possibility but *not* one that I'm currently
relying on.
> The consequences of evil
> actions always come back on the evildoer, having a negative effect
> on them.
A romantic and rather impractical view. Sometimes the consequences of
evil come back on the evildoer, sometimes they don't. Highly competent
evildoers have gone on to die in bed, surrounded by many loving, newly
wealthy great-grandchildren, and somewhere along the line, you've got
their genes.
> To that extent, Friendliness seems to me like an
> inherent trait in SIs, since unlike humans they will be smart enough
> to consider all of the consequences.
And if the SI is the only one around, and powerful enough that there are
no consequences?
Anyhoo, Friendliness isn't intended to suppress evil impulses. AIs don't
have 'em unless you put them there. Correspondingly, although other
possibilities exist, the default, engineering-*conservative* assumption is
that goodness also doesn't materialize in source code.
Friendliness is a means whereby a genuinely, willingly altruistic
programming team transmits genuine, willing altruism to an AI, packaged to
avoid damage in transmission, and infused in such form as to eventually
become independent of the original programmers.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence