Re: Terms of debate for Complex Systems Issues

From: Phil Goetz (philgoetz@yahoo.com)
Date: Wed Aug 24 2005 - 13:24:56 MDT


--- Richard Loosemore <rpwl@lightlink.com> wrote:

> I have to point out that you are looking at
> characteristics of systems that are all a great deal too simple to be
> relevant to the original point.

The field of "complex systems" studies systems that are,
typically, extremely simple, in that they can be described
in a few sentences: sandpiles, the Game of Life, and so on.
What makes the outcome complex is not the complexity of
the parts or the rules, but the multiplicity of non-linear
interactions.
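
To make that concrete: the entire rule set of the Game of Life fits in
a few lines of code. Here's a throwaway Python sketch, just to show how
little machinery is involved before the behaviour gets hard to predict:

    from collections import Counter

    def life_step(live):
        """Advance one generation; 'live' is a set of (x, y) cells."""
        # Count live neighbours of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell lives next step if it has 3 live neighbours, or has 2
        # and is already alive.  That's the whole rule.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # A five-cell "glider"; its long-run trajectory is not obvious from
    # the rule above.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = life_step(glider)
    print(sorted(glider))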

> This means: it builds a representation of what is going on inside
> itself. And as part of its "thinking" it may be curious about what
> happens if it reaches into its own programming and makes alterations
> to
> its goal system on the fly. (Is there anything in your formalism that
> says it cannot or would not do this? Is it not free, within the
> constraints of the goal system, to engage in speculation about
> possibilities? To be a good learner, it would surely imagine such
> eventualities.)

Sometimes we design systems to have this capability.
It's a difficult design problem; you have to go out of your way to
design a computer program that has access to its own programming.
This is rarely done completely, except in Prolog programs.
LISP programs often have this property to some extent, but
there is always an outer main loop that the program doesn't
have access to, unless you go to extremes (which some people
have done) and create a program that can replicate its own
source code. Those things are freaks of nature, and you
absolutely wouldn't take a big complicated program and
re-design it to have that property.
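
For anyone curious what "replicate its own source code" looks like,
the classic trick is a quine. A minimal Python example, purely
illustrative:

    # The two lines below, taken on their own, print exactly their own
    # source text; the trick is to keep a template of the code as data.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Even so, the program above has no access to the interpreter's main
loop; it just regurgitates its own text.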

Note that on modern CPUs, with code generated by compilers,
instructions and data are kept apart: a program-instruction (code)
block holds instructions, while the program's data values live in
separate data blocks, and compiled code does not ordinarily read
or rewrite its own code block at all.

> It also models the implications of making such changes. Let us suppose,
> *just for the sake of argument*, that it notices that some of its goals
> have subtle implications for the state of the world in the future
[ snip ]

I understand what you're saying. I disagree.
Somewhere in the agent, there is a piece of "code", whether it's
implemented as a neural network, or as a series of x86 instructions,
or whatever, that controls a choice mechanism that ranks or
otherwise selects actions, goals, and so on. Even while the program
is examining that very code, and deciding what to do about it,
it is doing so under the control of that code.

Supposing that the AI designer had gone to the extra effort to
allow the program to read and modify that particular piece of
code, the AI might choose to modify that code. But that choice
itself would still be controlled by that code.
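
Here is the shape of that argument as a toy Python sketch; the class,
the utility numbers, and the option names are all invented for the
illustration, not a claim about how any real AI is built:

    def initial_choose(options, utility):
        # The wired-in choice mechanism: rank candidate actions by the
        # built-in utility and take the best one.
        return max(options, key=utility)

    class Agent:
        def __init__(self):
            self.choose = initial_choose   # the piece of "code" in question
            self.utility = lambda option: option["value"]

        def consider_self_modification(self, new_chooser):
            # The agent may adopt a new choice function, but whether it
            # does so is decided by the choice function it already has.
            options = [
                {"kind": "keep", "value": 1},
                {"kind": "replace", "value": 2, "chooser": new_chooser},
            ]
            decision = self.choose(options, self.utility)
            if decision["kind"] == "replace":
                self.choose = decision["chooser"]
            return decision["kind"]

    agent = Agent()
    print(agent.consider_self_modification(lambda opts, u: opts[0]))

Whatever new chooser ends up installed, the act of installing it was
itself an action selected by the old one.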

The AI can't make a choice that isn't controlled by its
pre-existing choice mechanism, which is governed by a set
of wired-in preferences or values or goals or whatever you
choose to call them. It will act in a way meant to satisfy
those preferences. It may act in error, accidentally
redirecting its goal system. It might also be that the
motivational system has been programmed in a buggy way that
leads to conclusions the designers didn't intend, in the same
way that, for example, deep study of Buddhist doctrine could
lead you to conclude that destroying the Universe would be a
virtuous act, yet Buddha himself would not have approved
of that conclusion.

Digression: Buddhism operates under an exemplar-based rather
than a logic-based system. A disciple is not meant to deduce
all the rules and compute their logical closure; a disciple is
meant, rather, to study a large set of cases, and apply them
to other situations, using similarity judgements rather than logic.
This might be a safer way to design AI motivational systems.
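
In AI terms, that is roughly case-based or nearest-neighbour reasoning
rather than theorem proving. A toy Python sketch, with made-up cases
and a deliberately crude similarity measure, just to show the contrast
with computing a logical closure:

    # Exemplar-based judgement: compare a new situation to stored cases
    # and copy the verdict of the most similar one, rather than deducing
    # a verdict from a rule set.
    cases = [
        ({"harms_beings": 1, "relieves_suffering": 0}, "forbidden"),
        ({"harms_beings": 0, "relieves_suffering": 1}, "virtuous"),
        ({"harms_beings": 0, "relieves_suffering": 0}, "neutral"),
    ]

    def similarity(a, b):
        # Crude overlap measure: the number of features the two agree on.
        return sum(1 for k in a if a.get(k) == b.get(k))

    def judge(situation):
        _, verdict = max(cases, key=lambda cv: similarity(situation, cv[0]))
        return verdict

    print(judge({"harms_beings": 1, "relieves_suffering": 1}))  # "forbidden"

The verdict comes from the closest precedent, not from the logical
closure of a set of rules.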

> Now, demonstrate in some formal way that the goal system's structure,
> when the AGI has finished this little thought episode, is a
> predictable consequence of the current goal system.

It's hard to know what you mean by "predictable". You can't mean
"predictable" in the ordinary sense, since no one here would argue
that the goal system's structure is predictable. Hopefully you
don't mean "deterministic", i.e., that a sufficiently complex
deterministic system somehow stops being deterministic. But I
can't figure out what else you might be trying to say.

> Demonstrate that the goal
> system cannot go into an arbitrary state in a few minutes.

Likewise, if "arbitrary" mean "unpredictable", or if it means
"nondeterministic", then this statement is unhelpful. But I
don't know what else it could mean.

- Phil Goetz

                


