From: Gordon Worley (redbird@rbisland.cx)
Date: Wed May 23 2001 - 14:15:44 MDT
At 11:03 AM -0500 5/23/01, Jimmy Wales wrote:
>Gordon Worley wrote:
>> I've fussed about this before, but altruism is the closest word we
>> have.
>
>I don't agree.
>
>"Respecting the wishes of others" can't be an ethical primary,
>as it leads directly to unanswerable questions -- the wishes of
>others will typically be in conflict.
>
>I think that the correct word to use is not 'altruism' but 'benevolence'.
benevolent: 1 a : marked by or disposed to doing good <a benevolent
donor> b : organized for the purpose of doing good <a benevolent
society> 2 : marked by or suggestive of goodwill <benevolent smiles>

altruism: 1 : unselfish regard for or devotion to the welfare of
others 2 : behavior by an animal that is not beneficial to or may be
harmful to itself but that benefits others of its species
Look at the first definition for altruism; this is exactly what is
going on in a Friendly AI. Ve does not necessarily know verself
(i.e., be able to use 'I' legitimately) but is concerned with
respecting others' wishes, verself included, because ve doesn't
recognize a difference between verself and others. The second
definition, though, just outright doesn't matter, since it's not
Friendly to hurt verself, even though ve doesn't have a sense of
self. As I have expressed earlier, a selfish AI will be Friendly
just as a selfless and altruistic AI will be.
I would really recommend reading section two of FAI; that's the part
that 'sold' me on Friendliness.
Benevolent, as shown above, does not mean the same thing; it refers
to doing good. This is a poor criterion, since what is good tends to be
subjective. I've come to believe in what I refer to as relativistic
morality (morality is objective within a reference frame, be that
frame humanity, general intelligence, or some dumb animal), but
realize that many people develop their own morals to supplant the
objective ones. Stalin may have considered himself benevolent,
killing millions out of his own idea of goodwill, but in the
reference frame of humanity, what he did was evil. Thus, a Friendly
AI could be benevolent, but it would be a bad idea to make
benevolence the basis of Friendliness, since that would mean it could
think it was being Friendly by meeting its own twisted morals.
Friendliness depends on respecting volition.
Oh, and as to the wishes of others being in conflict, it comes down
to whatever does not hurt others. If you really want something, but
getting it will hurt someone else in the process, a Friendly AI will
not accommodate you. While ve will try to respect another's
volition, ve will not violate it! If you want to kill someone, a
Friendly AI isn't going to stop you, but ve won't help you, either
(well, this is where Eliezer wants the Sysop to step in and I don't).
So, altruism is used here as well as it can be; benevolence is just
flat-out the wrong word. If you can think of a better word (a new
one, maybe), I'll be more than happy to use it. :-)
--
Gordon Worley
http://www.rbisland.cx/
mailto:redbird@rbisland.cx
PGP Fingerprint: C462 FA84 B811 3501 9010 20D2 6EF3 77F7 BBD3 B003