From: Eliezer S. Yudkowsky (email@example.com)
Date: Sat Jul 23 2005 - 12:55:49 MDT

Russell Wallace wrote:
> CV throws vast resources of intelligence - information-processing ability -
> behind moral axioms evolved for survival on the plains of Africa, and then
> - this is the problem - proceeds as though with unlimited power comes
> unlimited moral authority.

I'm not sure what you mean by "moral axioms". Human goal systems don't
decompose cleanly and orthogonally into moral axioms + everything else. If
they did, my life would be a lot simpler.

In CV - which, by the way, I really should have called "Collective
Extrapolated Volition" - I called for defining a family of enhancements
applicable to abstractions of human minds and human society, such that the
extrapolation of abstract interacting enhanced humans could get far enough to
return a legitimate answer to the question, "What sort of AI would we want if
we were smarter?" This is one question. It could have more than one answer,
depending on how you define the extrapolation process. But if you do multiple
extrapolations, you have to define some way for the extrapolations to
interact - and you can't execute a single rewrite on the interaction framework,
which means that basic level stays permanently hardcoded at the programmers'
level of intelligence and wisdom. That is a bad, bad, bad thing. So the Collective
Extrapolated Volition returns one answer to one question, one AI rewrite to
start off the next round. It doesn't mean that the one answer is "We want an
AI that will mess with our individual destinies according to a uniform set of
averaged-out moral rules!"

Look, from the outside - to anyone who's not on the SIAI programming team -
what the programmers are doing (forget about how they do it) is supposed to be
intuitively simple. The programmers create an enormously powerful question
mark whose question is "What AI do we want to happen next?" I frankly do not
understand exactly where you think an error inevitably occurs in this
framework. Are you afraid of getting what you want? Are you afraid that most
other people want something different? (If so, why should SIAI listen to you,
not them?) Or are you worried that building a Collective Extrapolated
Volition as the fleshed-out, real-world implementation of the question mark
inherently defines 'wanting' in some sense other than the intuitive - the sense
in which you don't 'want' the future to be a giant ball of worms or whatever?
You've got to mean one of those three, and it's not clear which.

> In reality, a glut of intelligence/power
> combined with confinement - a high ratio of force to space - triggers the
> K-strategist elements of said axiom system, applying selective pressure in
> favor of memes corresponding to the moral concept of "evil". (Consider the
> trend in ratio of lawyers to engineers in the population over the last
> century for an infinitesimal foreshadowing.)

Dude, what the *heck* are you talking about?

> In pursuit of a guiding light that isn't there, the RPOP would extrapolate
> the interaction between K-strategist genes and parasite memes and force the
> result, with utter indifference to the consequences, on the entire human
> race. There will be no goal system for any element of the Collective but
> power - not clean power over the material world (which will effectively
> have ceased to exist except as the RPOP's substrate) but power always and
> only over others - a regime of the most absolute, perfectly distilled evil
> ever contemplated by the human mind. (Watch an old movie called "Zardoz"
> for a little more foreshadowing.)

Is this what you think would inevitably happen if, starting with present human
society, the average IQ began climbing by three points per year? At what
point - which decade, say - do you think humans would be so intelligent, know
so much, and think so quickly that their society would turn utterly evil? Or
if this is not what you think would happen, why would the AI mistakenly
extrapolate that such a society would turn utterly evil? I don't understand
what you conceive to be the chain of cause and effect.

--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT