Re: FAI means no programmer-sensitive AI morality

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jul 16 2002 - 13:10:16 MDT


Ben Goertzel wrote:
>
>>But you do, personally, have criteria of merit that you use to
>>actually choose between moralities? A desirability metric is a way of
>>choosing between futures. Do you have a "criterion of merit" that lets
>>you choose between desirability metrics? What is it?
>
> I'm not sure I get your question.
>
> Y = "Ben's morality"
> X = another moral system being judged

Has Y changed at any time in, say, the last 10 years? If so, by what
criteria?

Also, you've stated in the past that you value your own life and the
lives of your family more highly than typical human lives. Do you
believe it would be appropriate to pass on this asymmetry to the goal
system X of an AI created by you - assuming, as you specify, that the
actual result of doing so would be to satisfy the corresponding
asymmetry in your goal system Y?

>>If *human* intelligent systems, but not necessarily all theoretically
>>possible minds-in-general, tend toward certain moral systems as opposed
>>to others, then would you deem it desirable to construct an AI such that
>>it shared with humans the property of tending toward these certain moral
>>systems as intelligence increased?
>
> That is a tough nut of a question, Eliezer. I have thought about it before
> and it's troublesome.
>
> What is your view?

That this is the entire question of Friendly AI and the definition of
Friendliness.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

