From: Gordon Worley (redbird@rbisland.cx)
Date: Sat Oct 19 2002 - 07:32:50 MDT
On Saturday, October 19, 2002, at 02:34 AM, Mitch Howe wrote:
> The simplest concept of Friendliness, and the one that leads to the
> least confusion, is, in my opinion, volition-based Friendliness
I think that you're thinking about Friendliness the wrong way.
Friendliness gives an FAI the ability to find morality. And not just
any morality, but the correct morality. And if there is no correct
morality, then it will figure that out, too. Most of your post
discusses volition-based morality, not volition-based Friendliness (I'm
not even clear on what volition-based Friendliness is supposed to
be--your own version of Friendliness?).
Volition-based morality is a theory that a human named Eliezer S.
Yudkowsky has proposed as being closer to correctness than, say,
Christian morality. Humans, though, have too limited a mental capacity
to check the correctness of a moral theory, so an FAI's initial moral
theory (if one is given to the AI) is irrelevant because all known
theories are equally likely to be wrong.
There is nothing wrong with discussing morality; you need some system
by which to decide what is right and wrong. Otherwise you think that
everything is right or, less commonly, that everything is wrong.
Discussing the morality of an SIFAI (a superintelligent FAI) is
silliness, though. Beyond human-level AI, all you can do is make
guesses. Some guesses may turn out to be better than others, but
they're still all just guesses.
--
Gordon Worley                   "Man will become better when
http://www.rbisland.cx/          you show him what he is like."
redbird@rbisland.cx                     --Anton Chekhov
PGP: 0xBBD3B003