Re: The Relevance of Complex Systems

From: Michael Wilson
Date: Wed Sep 07 2005 - 20:28:54 MDT

Richard Loosemore wrote:
> An interesting subset (those sometimes referred to as being "on the edge
> of chaos") can show very ordered behavior. These are Complex Systems.
> Capital "C", notice, to distinguish them from "complex" in the sense of
> merely complicated.

The fact that the world contains a lot of systems with high-level
predictive regularities, which are probabilistic rather than
deterministic for any practical analysis, is hardly news to anyone.

> What is interesting about these is that they often show global
> regularities that do not appear to be derivable (using any form of
> analytic mathematics) from the local rules that govern the unit
> behaviors.

However, the many computer simulations you mention have quite
clearly demonstrated their susceptibility to Monte Carlo
simulation and to the induction of probabilistic predictive rules
(which are present any time you see 'order in the chaos') from the
results of such simulations. Frankly, whether analytic methods work
or not is irrelevant to the question of whether you have to invoke
'emergence' and abandon all hope of understanding your AGI; an AGI
does not have to be a causally tangled system itself to be able to
model and understand complex systems in the world.
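To make that concrete, here is a minimal sketch of the procedure I mean, using an arbitrarily chosen chaotic system (the logistic map at r = 3.9; the binning scheme and sample counts are my own illustrative choices, not anything from Loosemore's post): run the simulation many times and tally up a coarse probabilistic predictive rule from the results.

```python
import random

def logistic(x, r=3.9):
    """One step of the logistic map, a classic chaotic system."""
    return r * x * (1.0 - x)

# Monte Carlo: run many trajectories from random initial conditions and
# tally, for each coarse bin of the current state, how often the next
# state lands above 0.5 -- an induced probabilistic predictive rule.
BINS = 10
hits = [0] * BINS
totals = [0] * BINS
rng = random.Random(42)
for _ in range(2000):
    x = rng.random()
    for _ in range(100):
        b = min(int(x * BINS), BINS - 1)
        nxt = logistic(x)
        totals[b] += 1
        if nxt > 0.5:
            hits[b] += 1
        x = nxt

# The induced rule: P(next state > 0.5 | current state bin).
rule = [hits[b] / totals[b] if totals[b] else None for b in range(BINS)]
for b, p in enumerate(rule):
    print(f"x in [{b/BINS:.1f}, {(b+1)/BINS:.1f}): P(next > 0.5) ~= {p}")
```

No analytic solution of the map is needed: the regularities (some bins deterministically predict the next state's side of 0.5, others only probabilistically) fall straight out of brute-force sampling.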

> In other words, you can build all sorts of systems with enormously
> different local rules and global architectures, and the same patterns
> of global behavior seem to crop up time and again.
> What to conclude from this?

Well, let's see. The computer I am typing on has several hundred million
transistors in it. Several hundred million electronic components, each
capable of simple computation. But the rules that govern
their interaction aren't even nice and simple; they're fiendishly
complex, with millions of overlaid patterns and context-dependent /local/
behaviours. Surely the global behavior of such a system will be totally
and utterly incomprehensible! It must be many orders of magnitude beyond
the difficulty of predicting what will happen when, say, a mere million
Game of Life cells are run for a few billion generations!
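For reference, the local rules being invoked there fit in a few lines; a minimal Python sketch, using the standard glider to show a global regularity (the whole pattern translates diagonally every four generations) that appears nowhere in the statement of the local rule:

```python
from collections import Counter

def step(cells):
    """One Game of Life generation; cells is a set of live (x, y) pairs."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# The glider: after four generations the whole pattern has translated
# itself diagonally by (1, 1) -- a global regularity of the local rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
moved = (g == {(x + 1, y + 1) for (x, y) in glider})
print(moved)  # True
```

The point being that the glider's existence was discovered by running the rules, not deduced from them analytically, yet once observed it is a perfectly reliable, checkable prediction.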

In practice of course the computer satisfies a huge number of very
specific high level constraints with near perfect reliability. Why?
Because it was designed by deliberative intelligences that chose
causal mechanisms that would mesh in ways that maintain various
high-level constraints while manipulating hidden, internal degrees of
freedom to improve performance. This is how engineering works, and
it's something that nature has a devil of a time doing due to the
complete lack of planning and refactoring capability in natural selection.

As you say, most large-scale iterative systems picked at random
are either chaotic or monotonous. The natural processes that
produced and sustain humans, including the human brain, are indeed
'Complex systems' by this definition. Natural selection had the
brute force to do massive trial and error search for rule sets that
worked, and due to its complete dependence on incremental paths
was doomed to repeat this at one level of organisation after another.

> Namely: if you look at Mathematics as a whole you can see that
> the space of soluble, analytic, tractable problems is and always
> has been a pitiably small corner of the space of all possible systems.

Yet mathematics has been incredibly useful in understanding how the
universe works and building useful artefacts, because nature keeps
throwing up patterns again and again that can be closely approximated
with maths. The fact that maths can't tractably work out those
patterns from first principles is irrelevant. We are not attempting to
extrapolate the entire structure of the universe from a piece of fairy
cake, we are merely attempting to replicate learning (which means the
Bayesian superset of the scientific method for rationalists).
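By the 'Bayesian superset' I just mean ordinary belief updating over competing hypotheses; a toy sketch, with hypothetical coin-bias hypotheses of my own choosing, shows the whole mechanism:

```python
# Two hypothetical hypotheses about a coin: fair, or biased towards heads.
priors = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.8}  # P(heads | hypothesis)

def update(beliefs, heads):
    """One Bayesian update on a single observed flip (heads=True/False)."""
    post = {h: b * (p_heads[h] if heads else 1.0 - p_heads[h])
            for h, b in beliefs.items()}
    total = sum(post.values())          # normalise so beliefs sum to 1
    return {h: p / total for h, p in post.items()}

beliefs = dict(priors)
for heads in (True, True, True, False, True):  # observed data: H H H T H
    beliefs = update(beliefs, heads)
print(beliefs)  # belief has shifted towards 'biased'
```

Hypothesis, prediction, observation, revision: the scientific method is the special case where the update is driven all the way to acceptance or rejection.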

> The default assumption made by some people is that Mathematics as a
> domain of inquiry is gradually pushing back the frontiers and that
> in an infinite universe there may come a time when all possible
> problems (equations/systems) become tractable (i.e. analytically
> solvable) BUT there is a substantial body of thought, especially
> post-Godel, that believes that those systems are not just difficult
> to solve, but actually impossible.

To make this relevant you're going to have to state why intelligence
can only be implemented by a system which is utterly unable to
provably follow high-level constraints. You've already made one
stab at explaining why AGIs must be Complex, which was simply wrong.
Feel free to try again, because without such a proof all this
glorification of the strict intractability of various arbitrary
systems is irrelevant. Why is it that this particular class of
engineered artefact can't benefit from the controlled, selective
determinism that we have been able to put into all of our other IT
systems? Why should nature have the only answers here, when we are
steadily overtaking her everywhere else?

> When people try to cook up formalisms that are supposed to be the core
> of an intelligence, they often refer to systems of interacting parts in
> which they (the designers) think they know (a) what the parts look like
> and (b) how the parts interact and (c) what the system architecture and
> environmental input/output connection amounts to. A CAS person looks at
> these systems and says "Wait, that's a recipe for Complexity".

If we could somehow import a 'CAS person' from a world where there were
no digital computers, I'm sure they'd declare the entire idea ridiculous,
and claim that an economy based on billions of lines of crystalline,
deterministic, and hideously complex code was utterly impossible.

Some, possibly most, past attempts at AGI fit your bill. Certainly all
the connectionist ones do. Given that none of these attempts have
actually worked I don't think that says much about whether working
AGI designs can have high-level predictability or not.

> And what they mean is that the designer may *think* that a system can
> be built with (e.g.) bayesian local rules etc., but until they actually
> build a complete working version that grows up whilst interacting with
> a real environment, it is by no means certain that what they will get
> globally is what they thought they were going to get when they invented
> the local aspects of the design.

In engineering, the ability to predict in advance how a device will
behave increases with competence, the overall level of knowledge in
the field and the care taken in the design process. I would say that
applies very well to AGI. Things apparently work in reverse under
your development paradigm.

> In practice, it just never works that way. The connection between
> local and global is not usually very simple.

If you don't know how to design causally clean architectures
and stable goal systems, then no, it isn't.

> So you may find that if a few well-structured pieces of knowledge are
> set up in the AGI system by the programmer, the Bayesian-inspired local
> mechanism can allow the system to hustle along quite comfortably for a
> while .... until it gradually seizes up.

Rational systems don't 'seize up'. They may reach a fitness plateau,
but a competent design will not get itself into states where no
useful work is being done. I know perfectly well the various sorts
of behaviour you're alluding to; I used to work on systems like that
too, including trying to fix them by patching and poking and
generally carrying out trial and error without being able to reliably
trace the causes of the problem. If you really want to stick with
that because you feel that mysterious problems can only be solved by
mysterious solutions, then so be it.

> (This is a more general version of what was previously called the
> Grounding Problem, of course).

No, it's a rather strange analogy for (I think) the referent drift
that irrational systems can experience.

> *So this is the lesson that the CAS folks are trying to bring to the
> table.* (1) They know that most of the time when someone puts together
> a real system of interacting, adaptive units, there can be global
> regularities that are not identical to the local mechanisms. (2) They
> see AGI people coming up with proposals regarding the mechanisms of
> thought, but those ideas are inspired by certain aspects of what the
> high-level behavior *ought* to be (e.g. Bayesian reasoning), and the AGI
> people often talk as if it is obvious that these are also the underlying
> local mechanisms...... but this jump from local to global is simply not
> warranted!

I'll grant that it's not obvious. In humans, Bayesian reasoning is a
very high level behaviour. Making it the starting place for intelligence
is a radical step, but a very well supported one.

> These points are so crucial to the issues being discussed on this list,
> that at the very least they need to be taken seriously, rather than
> dismissed out of hand by people who are unbelievably scornful of the
> Complex Systems community.

The dismissal comes from the fact that the CS people keep insisting
that this is essential when no such thing has been established. The
scorn comes from the fact that most of the commentary from that
quarter has been along the lines of 'everything is intractable!
maths is a dead end! all is fuzzy patterns! look, here is one I
made on my computer last night, isn't it pretty?', rather than
descriptions of how specific discoveries from their field can be
used to design specific mechanisms that make a verifiable contribution
to AGI.

> I did not describe the details in my last post, only the general
> approach. Not enough for you to dismiss it as closely matching anything.

I extrapolated from your other posts. Frankly there are so many papers
on potential post-von-Neumann computing paradigms that it's usually a
fair bet to say that any general approach has already been tried. But
that's just a prior; please do demonstrate your originality by describing
some technical ideas.

> I looked at Flare a few years back. I was not impressed.

Even a technical criticism of Flare might be interesting.

> later discussed my ideas in detail with the head of a government
> agency that was charged with fostering innovation in this arena

Sorry, large chunks of this list have been there, done that. Why
bother with anecdotes about anonymous VIPs when you could just wow
us with a taste of the actual material?

>> This is a phase most people go through at some point in their AI
>> career, a cheap belief that makes it easy to avoid doing hard work.
> So that's why I stopped: I was afraid of hard work. Darn.

While I agree that James was assuming a lot, his point is valid in
the general case. Part of the problem is that state-of-the-art tools
are usually at the difficulty level of being challenging and fun,
not utterly frustrating. You can really get stuck into writing them,
work 16 hour days, churn out lots of code and end up wasting a lot of
time if you don't know precisely how those tools will be used to make
an actual AI system.

> Prior to starting that AI Ph.D. I had worked long and hard on
> Inmos Transputers (do you know what they were? massively parallel
> hardware with a novel parallel programming language integrated in
> the chip design), so my comments about the difficulty level were
> based on real world experience of massively parallel systems.

As it happens, I did a project on transputers when I was at university
(the lecturer for that course was one of the original designers and
seemed nostalgic about them). The technology seemed cool, at least by
the standards of mid-1980s computer science, but it struck me as a
solution looking for a problem. Modern technology can support similar
architectural designs without all the awkward sacrifices and
limitations; we have decent support libraries and properly general
purpose computing nodes that make massively parallel clusters
applicable to a nontrivial number of real world problems.

 * Michael Wilson

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT