RE: supergoal stability

From: Ben Goertzel (ben@goertzel.org)
Date: Fri May 03 2002 - 18:19:08 MDT


Hi,

> Is a complex nonlinear dynamical system really the right way to look at a
> Friendly AI? This is an intelligent being we're talking about, capable of
> making choices.

I do think that minds are *necessarily* complex nonlinear dynamical systems.
This is an intuition; I certainly haven't proved it.

As for "capable of making choices", this comment opens up a huge
philosophical debate. As you know, the neurophysiological evidence
(Gazzaniga etc.) is that most of our "felt conscious choices" are actually
determined by unconscious processes. List members who don't know this work
should look up Gazzaniga's split-brain experiments, which are terribly
convincing in this regard.

So I think what we call "choice" is a pretty complex nonlinear dynamical
phenomenon itself. Nietzsche said, "Consciousness is like the general who
takes responsibility for the spontaneous actions of his troops." This is a
lot, though not all, of the story. Consciousness also, by fallaciously
taking responsibility for decisions it did not in any real sense make,
plays a valuable role: it clarifies and crystallizes the nature of
decisions that the unconscious has already made. I think Nietzsche knew
this but did not emphasize it. The general, when he takes credit for what
the troops did, explains clearly what the troops did, in a way that may
help the troops to spontaneously and self-organizingly do even better
things next time.

> The Singularity is not a complex nonlinear dynamical
> process - it is alive

I don't see how you can say the Singularity is "alive." How are you
defining life? Normally it is defined in terms of metabolism and
reproduction.

> there is an intelligence, in fact a transhuman
> intelligence, standing behind it and making choices.

The half-illusion of "choice" is, in my view, a complex nonlinear dynamical
process itself.

>You can't create
> Friendly AI by blindly expecting it to be intelligent and alive;

Well, this is kind of obvious, and I'm certainly not taking that sort of
approach. No one is, actually. This is a bit of a "straw man" type
argument, I'm afraid.

> If, when you've created Friendly AI
> through your design understanding, someone with a surface understanding
> looks at the system and says: "Oh, look, a self-modifying goal
> system that
> follows a complex nonlinear dynamic," instead of "Oh, look, a mind that
> understands philosophy and is trying to improve itself," then
> you've screwed
> up the job completely.

I think those two perspectives are complementary and not contradictory.

I can look at YOU, Eliezer, and make both those statements honestly and
without contradiction.

> If you see the process of a mind improving itself as randomly
> drifting, then
> you won't be able to create Friendly AI because you won't be
> looking at the
> forces that make it more than random drift.

Again, this is a straw man argument. No, I don't see the process of a mind
improving itself as randomly drifting. No one does, probably.

Clearly "complex nonlinear dynamic" is not equal to "random drift".

> "a Friendly AI reasoning
> about morality is A NONLINEAR DYNAMIC SYSTEM". But to actually *build*
> Friendly AI, the only appropriate and useful metaphor is "A Friendly AI
> reasoning about morality is A FRIENDLY AI REASONING ABOUT MORALITY."

Eliezer, "a Friendly AI reasoning about morality is A NONLINEAR DYNAMIC
SYSTEM" is not intended by me as a METAPHOR.

It is actually a precise mathematical statement, using commonly defined
mathematical terms.
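For anyone on the list unsure what the precise sense of the term is, a
textbook example of a nonlinear dynamical system is the logistic map,
x(n+1) = r * x(n) * (1 - x(n)). A minimal sketch (my own illustration, not
anything specific to Novamente or Eliezer's design) of how trajectories
from nearly identical starting points diverge, which is one hallmark of
complex nonlinear dynamics and is quite distinct from random drift:

```python
# Minimal sketch of a nonlinear dynamical system: the logistic map
# x(n+1) = r * x(n) * (1 - x(n)). At r = 4.0 the map is chaotic:
# trajectories starting a millionth apart soon diverge substantially,
# even though every step is fully deterministic (no random drift).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0; return the full trajectory."""
    x = x0
    traj = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return traj

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # nearly identical initial condition

# The two deterministic trajectories decorrelate within a few dozen steps.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The point of the sketch: deterministic, simple rules can still produce
behavior that is complex and sensitive to initial conditions, which is the
commonly defined mathematical sense in which I use the phrase.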

-- Ben G



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT