Learning to be evil

From: Gordon Worley (redbird@rbisland.cx)
Date: Wed Feb 07 2001 - 15:56:23 MST

An assumption that many seemed to be making in the recent discussion
was that there will inevitably be evil AIs, hence the need for
Friendliness and a sysop (or not, depending on whom you ask) to make
sure that the AIs don't do anything evil. I'm wondering whether they
would ever become evil in the first place. The consequences of evil
actions always come back on the evildoer, having a negative effect
on them. Humans experience this too, which is why most humans are
not evil; some simply lack the intelligence to realize that their
actions are evil and will harm them, either directly or indirectly.
Any SI should be smart enough to realize the consequences of vis
actions and make the decision that will be of the
most benefit. To that extent, Friendliness seems to me like an
inherent trait of SIs, since unlike humans they will be smart enough
to consider all of the consequences. Then the only concern is viral
memes that might cause an SI not to consider all of the consequences
of an action fully before choosing the most beneficial one (in net
terms, of course). And, actually, such a meme would have to find a
way around the existing consequence-consideration system, since
adopting the meme would not be the most beneficial choice.

Maybe I'm failing to understand what Friendliness is, but my
understanding leads me to this conclusion.

Gordon Worley
PGP:  C462 FA84 B811 3501 9010  20D2 6EF3 77F7 BBD3 B003

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:35 MDT