From: Eliezer S. Yudkowsky (email@example.com)
Date: Tue Jul 16 2002 - 13:41:27 MDT
Eliezer S. Yudkowsky wrote:
> Ben Goertzel wrote:
>>> If *human* intelligent systems, but not necessarily all theoretically
>>> possible minds-in-general, tend toward certain moral systems as opposed
>>> to others, then would you deem it desirable to construct an AI such that
>>> it shared with humans the property of tending toward these certain moral
>>> systems as intelligence increased?
>> That is a tough nut of a question, Eliezer. I have thought about it
>> before and it's troublesome.
>> What is your view?
> That this is the entire question of Friendly AI and the definition of
> morality.

Actually, let me rephrase: This is the pragmatic definition of
*morality*. This is the external referent to which our own moral
systems should be regarded as successive approximations, just as our
belief systems should be regarded as successive approximations to
reality. Hence it is the point, the whole point, and nothing but the point.
Someday, under increasing intelligence, we may have an alternate
definition of morality in which the external referent is something other
than this. But until then, the best external referent we have is that
real morality is what your moral system would be if you were
superintelligent. Real morality is the limit of your moral philosophy
as intelligence goes to infinity. This may someday turn out to involve
reinventing your goal system so that it grounds elsewhere, but the end
result from our perspective is the same; the pragmatic referent of your
goal system can be defined as the limit of your moral philosophy as
intelligence goes to infinity.
(Yes, there's a multi-agent version of the definition. I want to get the
single-agent definition straight first.)

--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence