From: Gordon Worley (redbird@mac.com)
Date: Thu Aug 14 2003 - 17:33:45 MDT
On Thursday, August 14, 2003, at 03:08 PM, king-yin yan wrote:
> I don't have a clear understanding of the big picture yet, but I
> think I've spotted a mistake here: "Robust" Friendliness requires
> non-anthropomorphic, mathematical/logical precision. Anything less
> than that would be risky. However, Friendly to whom? We seem to have
> already excluded other primates from consideration, not to mention
> animals. Even if that is OK, the definition of Friendliness will
> become more problematic when uploading becomes available. Are
> uploads humans? Do copies have separate votes or one vote? I'm not
> sure how a formal system of Friendliness is capable of dealing with
> such questions.
Perhaps you're just confused about the terminology, but just in case not:
Friendliness and morality are not the same thing. Friendliness is a
system of metamorality. A Friendly AI will find an `objective'
morality. At least, that's the theory. Friendliness doesn't even
necessarily care explicitly about humans; they are cared about if it
turns out to be moral to care about them. I think most of us would
agree that it's moral to preserve our own human lives (an aspect of
panhuman morality), so I wouldn't be surprised if that was a special
case of the morality of a Friendly AI. Maybe it will be moral to
preserve the lives of cows; I don't know what cows want.
I guess the short of it is that Friendliness != morality, and the two
seem to be conflated in your message.
--
Gordon Worley                      `When I use a word,' Humpty Dumpty
http://www.rbisland.cx/            said, `it means just what I choose
redbird@mac.com                    it to mean--neither more nor less.'
PGP: 0xBBD3B003                                       --Lewis Carroll