Re: FAI means no programmer-sensitive AI morality

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 30 2002 - 13:09:44 MDT


Ben Goertzel wrote:
>>
>>Let's start with a moral question. Some people even today, though
>>thankfully not as many as there were a few generations ago, believe that
>>people of certain races (or at least, what they regard as "races") are
>>intrinsically worth less than others. You have a different morality
>>under which race makes no difference to intrinsic worth. Now I'm not
>>asking you why you believe these other people are wrong, because if so
>>you'll just answer "Because their morality conflicts with mine"; rather
>>I'm asking you why you don't share their morality. If morality is
>>genuinely arbitrary then one mapping of sentiences to intrinsic value is
>>as good as any other; why is your morality different from theirs?
>
> Being "worth less" is ambiguous...
>
> If the argument is that blacks are far less intelligent than whites
> (an argument made in the past by many people), this is an empirically
> testable statement, and has been refuted [yes, I know there is a slight IQ
> difference among races, with whites scoring higher than blacks and orientals
> scoring higher than whites -- but this is not the sort of thing I'm talking
> about...]

Why do you suppose that people phrased their argument as "race X is less
intelligent than race Y and therefore worth less", rather than "under my
choice of arbitrary morals, I choose to assign less value to race X than
race Y"?

> If the argument is that blacks don't have souls, then I guess it's outside
> the domain of experiment and logic...

(Parenthetically: Not really. You'd just ask "Why do you think people
have souls?" and then ask whether this sounds like a reason that would
apply equally to race X and race Y.)

But why do you suppose that the *argument* would be that race X is
soulless? Why argue about morality? Why not just say, "race X maps to
desirability 0 and race Y maps to desirability 1"?

> I'm not sure I would agree that "morality is genuinely arbitrary." I would
> say that there is no objective scientific or logical way to judge one moral
> system versus another. Because any system of judging presumes some
> "criterion of merit"; the choice of criterion of merit will then determine
> which moral system is better.... of course, one can then ask which criterion
> of merit is better

But you do, personally, have criteria of merit which you use to actually
choose between moralities? A desirability metric is a way of choosing
between futures. Do you have a "criterion of merit" that lets you
choose between desirability metrics? What is it?
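
(A minimal sketch, purely illustrative and with hypothetical names not taken
from anything in this thread, of the two levels of choice at issue here: a
desirability metric ranks futures, and a criterion of merit would have to
rank the metrics themselves.)

    # Illustrative sketch only; the names and types are assumptions made
    # for clarity, not anything proposed in the post.
    from typing import Callable

    Future = str
    DesirabilityMetric = Callable[[Future], float]            # ranks futures
    CriterionOfMerit = Callable[[DesirabilityMetric], float]  # ranks the metrics themselves

    def choose_future(futures: list[Future],
                      metric: DesirabilityMetric) -> Future:
        # A desirability metric is a way of choosing between futures.
        return max(futures, key=metric)

    def choose_metric(metrics: list[DesirabilityMetric],
                      criterion: CriterionOfMerit) -> DesirabilityMetric:
        # A criterion of merit, if one has it, chooses between desirability metrics.
        return max(metrics, key=criterion)

The question above is whether there is any such choose_metric step in actual
use, and if so, what its criterion is.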

> However, it could nonetheless be the case that highly intelligent systems
> tend toward certain moral systems, as opposed to others. Just as modern
> technological culture tends toward different moral systems than tribal
> culture....

If *human* intelligent systems, but not necessarily all theoretically
possible minds-in-general, tend toward certain moral systems as opposed
to others, then would you deem it desirable to construct an AI such that
it shared with humans the property of tending toward these certain moral
systems as intelligence increased?

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

