From: king-yin yan (firstname.lastname@example.org)
Date: Fri Aug 15 2003 - 15:18:10 MDT
>Perhaps you're just confused on the terminology, but just in case not:
>Friendliness and morality are not the same thing. Friendliness is a
>system of metamorality. A Friendly AI will find an "objective"
>morality. At least, that's the theory. Friendliness doesn't even
>necessarily explicitly care about humans; they are cared about if it
>turns out to be moral to care about them. I think most of us would
>agree that it's moral to preserve our own human lives (an aspect of
>panhuman morality), so I wouldn't be surprised if that was a special
>case of the morality of a Friendly AI. Maybe it will be moral to
>preserve the lives of cows, I don't know what cows want.
>I guess the short of it is that Friendliness != morality, which you
>seem to be confusing here.
Friendliness does not specify the details of morality, but the FAI
is supposed to work them out, so Friendliness does indirectly
determine morality. Otherwise the FAI would not interfere with moral
issues at all. Come to think of it, that might be the better design
-- an AI as a tool for solving cognitive problems, but no more than that.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT