Re: Geddes's 'Moral Perturbation Theory'

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Jun 18 2004 - 00:36:58 MDT


Marc Geddes wrote:
>
> You must admit that human individual characteristics
> simply would not have equal weight when factored into
> the CV. Isn't it reasonable to conclude that the
> characteristics of a mass murderer are not likely to
> have as much weight as the characteristics of Mother
> Teresa in the CV? And the more long-term the
> extrapolation the less weight the characteristics of
> the stupid, evil people will have?

One human, one vote; so I intend to write the dynamic.

If you are guessing that stupid, evil people will have less influence on
those around them - or, hey, will become less stupid and evil as humanity
grows - then I certainly hope so. Otherwise I'd ask for the moral
justification for why not, with one finger on the off switch, were I the
Last Judge.

> So it seems to me that if we restrict our attention to
> the best, brightest, most altruistic etc., the
> characteristics of these people are likely to 'balloon
> out' in importance relative to the characteristics of
> the rest of humanity as we extrapolate forward.

More likely, you'll get something that very vaguely and distantly resembles
an interim stage humanity goes through at some point, considered as
individuals. Our collective volition might never be that stupid.

> I never said we exclude everyone else, I just said
> that for the purposes of a first approximation of the
> CV it would seem reasonable to restrict our attention
> to the best, brightest, most altruistic humans etc.
> For more accurate approximations we would then expand
> the pool of people to be considered for a random
> sample.

If you mean that asking me for my personal philosophy is likely to get you
a *better* approximation of a CV than asking Britney Spears, then you are
probably right; but *hopefully*, Marc, the CV will be wiser than us *both*.
It'd be frustrating to do all that work, and then find out that I could
have taken over the world and done as good a job. And boring, if the world
were so dull and prosaic as my wildest imaginings.

The initial dynamic itself runs on (takes a deep breath) one human, one vote.
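
To make the contrast concrete - a toy sketch only, nothing in the CV
proposal specifies code like this, and both helper functions are made up
for illustration - the difference between the two schemes looks like this:

from collections import Counter
from typing import Iterable, Tuple

def tally_equal(volitions: Iterable[str]) -> Counter:
    """One human, one vote: each extrapolated volition carries weight 1."""
    return Counter(volitions)

def tally_weighted(volitions: Iterable[Tuple[str, float]]) -> Counter:
    """Geddes-style scheme: volitions scaled by some judged merit score."""
    tally: Counter = Counter()
    for preference, weight in volitions:
        tally[preference] += weight
    return tally

# Under equal weighting the mass murderer and Mother Teresa each count once;
# under the weighted scheme someone must first decide the merit scores.
print(tally_equal(["A", "A", "B"]))              # e.g. Counter({'A': 2, 'B': 1})
print(tally_weighted([("A", 0.1), ("B", 5.0)]))  # e.g. Counter({'B': 5.0, 'A': 0.1})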

If an RPOP wanted a good first approximation for a sample, it'd pick a
hundred humans most a priori likely to be informative about the largest
clusters in the set of final extrapolated volitions, and then extrapolate
out their volitions from that starting point. Contrary to your intuitions,
Marc, this means using 100 ordinary folks. And then you extrapolate those
people knowing more, thinking faster, growing up farther together; which
may or may not arrive at an interim point vaguely reminiscent of Eliezer
Yudkowsky before the extrapolation moves on. Probably not. The
circumstances that forged me are too unusual. The same would hold of those
other geniuses that one might consider. By the time a majority of humanity
zips past the Einstein milestone for raw intelligence, they may have grown
in other ways that would render Einstein a pointless comparison. If you
are someday as bright as Newton I do not think you will become an
alchemist, and I do not think you will linger long at Newton's marker. I
am not a symbol of an extrapolated Indian day laborer who knows more,
thinks faster. I am myself. Just myself. One human, one vote.
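
To make that selection heuristic concrete, here is a toy sketch - under the
large and purely illustrative assumption that cluster membership in the
space of final extrapolated volitions could be estimated a priori. Every
name in it (pick_sample, cluster_of) is made up; the CV proposal specifies
no such algorithm.

import random
from collections import Counter
from typing import Dict, Hashable, List, Sequence

def pick_sample(population: Sequence[Hashable],
                cluster_of: Dict[Hashable, str],
                sample_size: int = 100,
                seed: int = 0) -> List[Hashable]:
    """Draw a sample sized in proportion to the largest volition clusters."""
    rng = random.Random(seed)
    cluster_sizes = Counter(cluster_of[person] for person in population)
    total = sum(cluster_sizes.values())
    sample: List[Hashable] = []
    for cluster, size in cluster_sizes.most_common():
        # Larger clusters get proportionally more representatives, which is
        # why the sample ends up dominated by ordinary folks, not geniuses.
        quota = max(1, round(sample_size * size / total))
        members = [p for p in population if cluster_of[p] == cluster]
        sample.extend(rng.sample(members, min(quota, len(members))))
    return sample[:sample_size]

The only point of the sketch is that "informative about the largest
clusters" pulls the sample toward typical people, not toward outliers.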

Look to the best, brightest, most altruistic humans, and you will find that
they no longer come up with elaborate justifications for why they should be
philosopher-kings. Not even cleverly disguised ones.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

