From: Wei Dai (weidai@eskimo.com)
Date: Tue Jul 23 2002 - 14:39:42 MDT
Eliezer S. Yudkowsky wrote:
> Someday, under increasing intelligence, we may have an alternate
> definition of morality in which the external referent is something other
> than this. But until then, the best external referent we have is that
> real morality is what your moral system would be if you were
> superintelligent. Real morality is the limit of your moral philosophy
> as intelligence goes to infinity. This may someday turn out to involve
> reinventing your goal system so that it grounds elsewhere, but the end
> result from our perspective is the same; the pragmatic referent of your
> goal system can be defined as the limit of your moral philosophy as
> intelligence goes to infinity.
Do you assume that the limit does not depend sensitively on your current
moral philosophy, your personal history, or the specific path your
intelligence increase takes?
> (Yes, there's a multi-agent version of the definition. I want to get
> the single-agent definition straight first.)
I can't wait. What's the multi-agent definition? Can it somehow average out
the possibly chaotic nature of the evolution of moral philosophy?
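
A minimal sketch of that worry, under the purely illustrative assumption
that revising a moral philosophy can be modeled as iterating a map on its
current state: if the map is chaotic (the logistic map at r = 4 below is
the textbook example), two nearly identical starting points diverge, and
there is no limit independent of where you started. Nothing here is from
the original posts; it only illustrates sensitive dependence on initial
conditions.

def logistic(x, r=4.0):
    """One hypothetical 'revision step': a chaotic map on [0, 1]."""
    return r * x * (1.0 - x)

# Two almost identical starting "moral philosophies", encoded as numbers.
a, b = 0.300000, 0.300001

for step in range(50):
    a, b = logistic(a), logistic(b)

# After 50 steps the trajectories differ by order 1, so the "limit as
# intelligence goes to infinity" would depend on the starting point.
print(abs(a - b))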