From: Phil Goetz (philgoetz@yahoo.com)
Date: Tue Aug 23 2005 - 16:38:19 MDT
--- Richard Loosemore <rpwl@lightlink.com> wrote:
> ME) Okay, I'll try again. There is a very general argument, from
> Complex Systems theory, that says that if something of the
> complexity of an AGI has a goal system, and a thinking system that
> is capable of building and making use of sophisticated
> representations of (among other things) the structure and behavior
> of its own goal system, then it would be extraordinarily unlikely
> if that AGI's behavior was straightforwardly determined by the goal
> system itself, because the feedback loop between goal system and
> thinking system would be so sensitive to other influences that it
> would bring pretty much the entire rest of the universe into the
> equation. The overall behavior, in other words, would be a Complex
> (capital C) conjunction of goal system and representational system,
> and it would be meaningless to assert that it would still be
> equivalent to a modified or augmented form of the original goal
> system. For that reason we need to be very careful when we try to
> draw conclusions about how the AGI would behave.
>
> SOMEONE ELSE) You still haven't given any arguments to support your
> contention.
If you are saying that when you take a system designed to fulfill
goals and give it a rich environment, lots of rules, and lots of
other knowledge, its code will somehow magically run in ways that
one could prove impossible from the code itself, then you are wrong.
I hope that's not what you're saying. You're not saying, for
instance, that a sufficiently complex computer can solve the halting
problem? Or that a very complex program running in protected mode
can, by virtue of its complexity (rather than by virtue of a very
specific exploit), operate in real mode? Or that a program whose
main loop guarantees that each operator is chosen so as to minimize
some value will actually act so as to maximize that value? Or that
invariants in the code will vary?
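To make the minimization example concrete, here is a minimal Python
sketch (the names greedy_descent, cost, and neighbors are made up
for illustration, not taken from any particular AGI design). No
matter how elaborate the neighbor-generating function becomes, the
loop invariant guarantees the value never increases:

    def greedy_descent(state, cost, neighbors, steps=1000):
        """Move to a neighboring state only if it lowers cost.

        Loop invariant: cost(state) never increases. However rich
        or complicated `neighbors` is -- it could consult the
        entire rest of the universe -- the returned state cannot
        cost more than the one we started with.
        """
        for _ in range(steps):
            candidates = neighbors(state)
            if not candidates:
                break
            best = min(candidates, key=cost)
            if cost(best) >= cost(state):
                break  # local minimum: stop, never move uphill
            state = best
        return state

    # Example: minimizing x**2 over integer steps.
    print(greedy_descent(10, cost=lambda x: x * x,
                         neighbors=lambda x: [x - 1, x + 1]))  # -> 0

Making `neighbors` more complex can change *which* minimum the loop
finds, but it cannot turn a minimizer into a maximizer.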
If you're saying that it will be hard to predict what this complex
system will do, that's not a contention, because no one will contend
with that statement.
I suppose you're trying to assert something between these extremes,
but I can't figure out what.
I think you need to be more specific about
what you mean by "complex systems theory". There isn't a
"complex systems theory" in the way that there is an "evolutionary
theory"; there is a set of principles and a set of tools,
and a whole heap of data and anecdotes.
Complex systems theory (CST) might say things such as:
- a plot of the number of goals of the system vs. the importance of
  those goals would show a power-law distribution
- there is some critical number of average possible action
  transitions above which the behavior of the system leads to an
  expansion rather than a contraction in state space
- there is a ratio of exploration of new hypotheses over exploitation
  of confirmed hypotheses, and there are two values for this ratio
  that locate phase shifts between "static", "dynamic", and
  "unstable/devolving" modes of operation (a toy sketch of this
  ratio follows below)
There are no general principles of complex systems that say that
a system "emerges" into a new state in which the laws from the
previous state no longer apply. Evolution and emergent complexity
don't violate the laws of thermodynamics. Likewise, a computation,
no matter how complex the computer, can't violate the laws of
computation.
In friendly and respectful debate,
Phil Goetz