From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 30 2002 - 10:36:10 MDT
Ben Goertzel wrote:
>> I view it as an unbearably horrifying possibility that the next
>> billion years of humanity's existence may be substantially
>> different depending on whether the first AGI was raised by an
>> environmentalist. It's equally horrifying whether you're an
>> environmentalist looking at Eliezer or vice versa. It shouldn't
>> depend on who happens to build the AI, it should depend on *who's
>> right*.
>
> You seem to have this idea that there is some kind of
> "meta-rightness" standard by which different ethical standards can be
> judged more or less correct.
>
> There is no such thing.
You seem awfully certain about that. (Heh. I finally got to use that
line.)
Ben, you're a member of a species that has spent the last few hundred
thousand years arguing politics and morality with occasional pauses to
eat and sleep. Is it really that strange if I suggest that there are
complex functional adaptations which influence your choice of
declarative moral beliefs?
This is actually a rather Zen situation; you have meta-ethics but you
don't know you have meta-ethics. I wish now I had some formal training
in Zen methods of getting people to realize that they possess a
subjective experience by just having them *see* it, without a lot of
verbal argument.
Let's start with a moral question. Some people even today, though
thankfully not as many as there were a few generations ago, believe that
people of certain races (or at least, what they regard as "races") are
intrinsically worth less than others. You have a different morality
under which race makes no difference to intrinsic worth. Now I'm not
asking you why you believe these other people are wrong, because then
you'll just answer "Because their morality conflicts with mine"; rather
I'm asking you why you don't share their morality. If morality is
genuinely arbitrary then one mapping of sentiences to intrinsic value is
as good as any other; why is your morality different from theirs?
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence