Re: supergoal stability

From: Eliezer S. Yudkowsky
Date: Fri May 03 2002 - 18:56:23 MDT

Ben Goertzel wrote:
> > The Singularity is not a complex nonlinear dynamical
> > process - it is alive
> I don't see how you can say the Singularity is "alive." How are you
> defining life? Normally it is defined in terms of metabolism and
> reproduction.

"Life is anything designed primarily by evolution, plus all beings above a
certain level of intelligence." This is my old definition, which I still
like today because it manages to include cows, viruses, mules, humans, and
AIs.

> > You can't create
> > Friendly AI by blindly expecting it to be intelligent and alive;
> Well, this is kind of obvious, and I'm certainly not taking that sort of
> approach. No one is, actually. This is a bit of a "straw man" type
> argument, I'm afraid.

I'm not accusing you of having done this - I wasn't accusing anyone, in
fact, but rather talking about the reasons why *I* usually don't invoke the
"intelligent and alive" argument. It's true, but not a good thing for
Friendly AI builders to dwell on because it's the destination rather than
the path.

> Eliezer, "a Friendly AI reasoning about morality is A NONLINEAR DYNAMIC
> SYSTEM" is not intended by me as a METAPHOR.
> It is actually a precise mathematical statement, using commonly defined
> mathematical terms.

But you were using it as a metaphor rather than as a mathematical
statement. You pointed out the butterfly effect in weather systems, which
is a behavior for one kind of nonlinear dynamic system, and drew a
correspondence to a Friendly AI modifying its own goals. In other words,
you cited a prototypical example of a nonlinear dynamic system and argued
that a Friendly AI, which is a *very special* kind of nonlinear dynamic
system, will inherit the stereotypical qualities attributed to prototypical
examples of nonlinear dynamic systems: UNPREDICTABILITY, UNCONTROLLABILITY,
NONINTENTIONALITY. That's a metaphor if ever I saw one. Humans are
nonlinear dynamic systems, mathematically speaking, but an extremely
noncentral case of nonlinear dynamic systems, one in which it is useful to
attribute intentionality to the system's final state because the system
*does in fact* have intentions.

-- -- -- -- --
Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT