RE: FAI means no programmer-sensitive AI morality

From: Rafal Smigrodzki (rms2g@virginia.edu)
Date: Tue Jul 16 2002 - 15:38:06 MDT


Eliezer S. Yudkowsky wrote:

Real morality is the limit of your moral philosophy
as intelligence goes to infinity. This may someday turn out to involve
reinventing your goal system so that it grounds elsewhere, but the end
result from our perspective is the same; the pragmatic referent of your
goal system can be defined as the limit of your moral philosophy as
intelligence goes to infinity.

### Intelligence going to infinity would amount to the ability to find
the best possible solution to any problem under any desirability
metric, including cases of infinitely recursive modification of
goal-systems. I presume we would benefit from the input of experts on
eschatology here.
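
Stated loosely (my notation, not Eliezer's): let M(i) be the moral
philosophy a mind endorses at intelligence level i. The claim is then

    M* = \lim_{i \to \infty} M(i)

which quietly presupposes that the limit exists and is unique. That
presupposition is exactly what I question below.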

For my part, I would think that as intelligence goes to infinity, the
tree of possible goal-systems might branch without bound rather than
converge. After all, while the goal-systems of amoebae are pretty
uniform, we in our advanced wisdom strive for much more, and in many
different ways.
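
To make the convergence-versus-divergence contrast concrete, here is a
toy Python sketch (entirely my own construction; the two revision
operators are arbitrary illustrations, not models of any real
goal-system):

    # Each round of reflection maps a goal-system to its successor(s).
    def contracting_revision(goal: float) -> list[float]:
        # Pulls every goal-system halfway toward the fixed point 1.0.
        return [1.0 + 0.5 * (goal - 1.0)]

    def branching_revision(goal: float) -> list[float]:
        # Each reflection step spawns two distinct, incompatible successors.
        return [2.0 * goal, 2.0 * goal + 1.0]

    def evolve(revise, seeds: list[float], steps: int) -> list[float]:
        for _ in range(steps):
            seeds = [g for s in seeds for g in revise(s)]
        return seeds

    print(evolve(contracting_revision, [0.0, 10.0], 30))
    # -> two values, both ~1.0: different starting points share one limit

    print(len(set(evolve(branching_revision, [0.0], 10))))
    # -> 1024 distinct descendant goal-systems after only ten steps

Nothing in "revise your goal-system with more intelligence" forces the
contracting case over the branching one; that is an extra assumption.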

Why should any sentient procure an SAI *not* grounded in vis own
programmer-sensitive morality? I guess there might be goal-systems
encompassing a yearning for a non-sequitur, but otherwise FAI is
programmer-sensitive, even if only by the six-degrees-of-separation
expedient of panhuman (or even pansentient) motivation analysis.

Rafal


