From: Christian L. (firstname.lastname@example.org)
Date: Sat Oct 19 2002 - 16:09:07 MDT
>Friendliness allows an FAI the ability to find morality. And not just any
>morality, but the correct morality.
How exactly do you go about finding morality? I suspect the answer from you
or Eli would be something like, "We don't know. We're not smart enough yet".
How then do you program something to learn to search for morality when it is
intelligent enough to do so?
>And if there is no correct morality, then it will figure that out, too.
Yes, this is most likely the case. The idea of a "correct" morality is as
weird as the idea of a "correct" set of rules for a game. First you must
decide what the game should be like, and then create the rules. IMO morality
can be thought of as rules for interaction between humans. If you have
defined a certain society that you want, you might (theoretically at least)
be able to define the morality that would bring about that society (if
people live by it).
So the question then becomes whether there is a "correct" society (probably a
bad word for post-singularity existence, but WTH). The FAI would in that case
have to "find" this correct society, and then create the morality that comes
with it. And as history shows, everyone has their own take on what the correct
society is.
There is also an interesting discussion about the *enforcement* of morality,
but it's getting late now...
>There is nothing wrong with discussing morality; you need some system by
>which to decide what is right and wrong.
I think it is a fallacy to believe that this system can be determined
objectively.
>Discussing the morality of an SIFAI is silliness, though.
Of course. Any discussion of the motivations of an SI is pure speculation at
best.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:41 MDT