The Relevance of Complex Systems [was: Re: Retrenchment]

From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Sep 07 2005 - 18:37:02 MDT


Eliezer,

I won't do a line-by-line on your post (copied below).

Instead I want to address the background issue. This is part of the
essay I promised when I wrote the Retrenchment post, explaining in more
detail the parts that were not clear before. There has been much
confusion about what I meant when I advocated the relevance of Complex
Systems theory, but until now I have despaired of giving an exact
description of it, short of writing an entire book.

But now I have to say something, because you wrote this:

> "Shut up and learn" is a plaint to which I am, in general, prepared to
> be sympathetic. But you're not the only one with recommendations.
> Give me one example of a technical understanding, one useful for
> making better-than-random guesses about specific observable outcomes,
> which derives from chaos theory. Phil Goetz, making stuff up at
> random, gave decent examples of what I'm looking for. Make a good
> enough case, and I'll put chaos theory on the head of my menu.

You called it "chaos theory".

Chaos theory is not Complex Systems. They are related, but chaos theory
has no relevance to AI. I now have no idea what you thought I was
talking about all along.

So here, as briefly as I can, is what I meant by Complex Systems and
its relevance for AI. And for anyone following the other current thread
about tools for building an AGI, *this* is one of the reasons why all
those attempts to hack an AGI without giving serious consideration to
tool building are likely to end in tears.

[I really hope that people meet me half way here: I am trying to convey
a lot of stuff very concisely, so I am opening myself up to the danger
of yet another spate of line-by-line misconstruals.]

*Complex Adaptive Systems* (aka "Complex Systems")

If one builds a system composed of many (more or less identical)
elements, each of which is relatively simple but able to do a
moderately interesting amount of computation (with messages being
exchanged between the elements, some influences coming in from outside,
and adaptation going on), then one observes that such a system
sometimes exhibits interesting behaviors, as follows.

Sometimes they evolve in chaotic ways. In fact, *usually* they evolve
in chaotic ways. Not interesting.

Sometimes they head straight toward a latch-up state after being
switched on, and stay there. Not chaos, just boring.

An interesting subset (those sometimes referred to as being "on the edge
of chaos") can show very ordered behavior. These are Complex Systems.
Capital "C", notice, to distinguish them from "complex" in the sense of
merely complicated.
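
To make those three regimes concrete, here is a minimal sketch, purely
my own illustration and not anything the argument depends on, using
one-dimensional elementary cellular automata. The particular rule
numbers are just the textbook examples: rule 30 wanders chaotically,
rule 254 latches up into a uniform block, and rule 110 sits at the
"edge of chaos", producing long-lived interacting structures.

# Toy illustration only: one-dimensional elementary cellular automata.
# Rule 30 is the textbook chaotic case, rule 254 latches up into a
# uniform block, and rule 110 is the standard "edge of chaos" example.

def step(cells, rule):
    """Apply an elementary CA rule (0-255) to one row of cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        out.append((rule >> ((left << 2) | (centre << 1) | right)) & 1)
    return out

def run(rule, width=64, generations=24):
    cells = [0] * width
    cells[width // 2] = 1                     # single seed cell
    for _ in range(generations):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

for rule in (30, 254, 110):                   # chaotic, latch-up, edge of chaos
    print(f"--- rule {rule} ---")
    run(rule)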

What is interesting about these is that they often show global
regularities that do not appear to be derivable (using any form of
analytic mathematics) from the local rules that govern the unit
behaviors. This is what a CAS ("Complex Adaptive Systems") person would
refer to as "emergent" behaviors. More than that, some of these global
regularities appear to be common to many types of CAS. In other words,
you can build all sorts of systems with enormously different local
rules and global architectures, and the same patterns of global behavior
seem to crop up time and again.
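
One concrete example of such a cross-system regularity (again my own
choice of illustration, not something the argument depends on) is the
heavy-tailed distribution of "avalanche" sizes that shows up in
sandpile models, forest-fire models, and other superficially unrelated
systems. Here is a minimal sketch of the classic Bak-Tang-Wiesenfeld
sandpile; nothing in the four-grain toppling rule mentions avalanches
of wildly different sizes, yet that is what comes out.

import random

# Minimal Bak-Tang-Wiesenfeld sandpile sketch. The local rule is trivial:
# a site holding 4 or more grains topples, giving one grain to each
# neighbour (grains fall off the edge). The global regularity, avalanches
# of wildly varying sizes, is nowhere visible in that rule.

SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]

def drop(x, y):
    """Add one grain at (x, y), relax the pile, return the avalanche size."""
    grid[x][y] += 1
    avalanche = 0
    unstable = [(x, y)]
    while unstable:
        i, j = unstable.pop()
        while grid[i][j] >= 4:
            grid[i][j] -= 4
            avalanche += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < SIZE and 0 <= nj < SIZE:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
    return avalanche

sizes = [drop(random.randrange(SIZE), random.randrange(SIZE))
         for _ in range(20000)]
print("largest avalanche:", max(sizes),
      "out of", sum(s > 0 for s in sizes), "avalanches")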

What to conclude from this?

First, that bit about "do not appear to be derivable (using any form of
analytic mathematics)" is something that a lot of people have thought
deeply about. This is no mere statement of inability, but a profound
realization about what it means to do math. Namely: if you look at
Mathematics as a whole, you can see that the space of soluble, analytic,
tractable problems is, and always has been, a pitiably small corner of
the space of all possible systems. It is trivially easy to write down
equations (or systems, speaking more generally) that are completely
intractable. The default assumption made by some people is that
Mathematics as a domain of inquiry is gradually pushing back the
frontiers, and that in an infinite universe there may come a time when
all possible problems (equations/systems) become tractable (i.e.
analytically solvable). But there is a substantial body of thought,
especially post-Gödel, which holds that those systems are not just
difficult to solve, but actually impossible. When I talk about "the
limitations of mathematics" I mean precisely that point of view.

All that the CAS people did was to come up with some fabulously
interesting types of regularity (the emergent properties of Complex
Adaptive Systems), and then point out that the problem of accounting
for these regularities is way, way beyond anything analytic mathematics
has ever been able to handle. They allude to a
philosophical/methodological position
in the math community, not to mere "difficulty". Heck, if there are
nonlinear DEs that the math folks declare to be "ridiculously hard and
probably impossible to solve", then what are these Complex Systems,
which are a gazillion times more complex?

Take the regularities observed in one of the most trivial systems that
we can think about, Conway's Life. Can we find a set of equations that
will generate the "regular" forms that emerge in that game? All of the
regular forms, not just some. We should be able to plug in the
algorithm that defines the game, and out the other end should come
descriptions of the glider guns, etc. Maybe there are optimists who
think this is possible. There are many people, I submit, who consider
this kind of solution to be impossible. The function that generates
regularities, given the local rules of the Conway system, is *never*
going to be found. It does not exist.
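
For anyone who wants to see just how little the local rule contains,
here is a minimal sketch of the Life update rule. These few lines are
the *entire* specification of the system; gliders, glider guns and the
rest appear nowhere in them, and yet run the rule from a suitable seed
and out they come.

from collections import Counter

# Minimal sketch of Conway's Life. The neighbour-count test below is the
# entire "local physics" of the system; gliders are not mentioned in it.

def life_step(live):
    """live: set of (x, y) cells that are alive; returns the next generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells that, under the rule above, reproduce their own
# shape one cell further along the diagonal every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = set(glider)
for _ in range(4):
    state = life_step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
print("glider reproduced itself, shifted by (1, 1)")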

What is the relevance for AI?

When people try to cook up formalisms that are supposed to be the core
of an intelligence, they often propose systems of interacting parts in
which they (the designers) think they know (a) what the parts look
like, (b) how the parts interact, and (c) what the system architecture
and environmental input/output connection amount to. A CAS person looks
at these systems and says "Wait, that's a recipe for Complexity". And
what they mean is that the designer may *think* that a system can be
built with (e.g.) Bayesian local rules, but until they actually build a
complete working version that grows up whilst interacting with a real
environment, it is by no means certain that what they get globally is
what they thought they were going to get when they invented the local
aspects of the design. In practice, it just never works that way. The
connection between local and global is not usually very simple.
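
A down-to-earth, non-AI illustration of how wide that local/global gap
can be (my example, not one the CAS literature forces on anyone) is a
Schelling-style neighborhood model: agents with only a mild local
preference for having some neighbors of their own type nevertheless
produce a strongly segregated global pattern. You would not predict the
global outcome by staring at the local rule.

import random

# Toy Schelling-style model. Local rule: an agent is content if at least
# 30% of its occupied neighbours share its type. Global outcome after a
# few rounds of relocation: large single-type blocks, far more segregated
# than the mild local rule suggests.

SIZE, THRESHOLD = 30, 0.3

def neighbours(grid, x, y):
    return [grid[(x + dx) % SIZE][(y + dy) % SIZE]
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def unhappy(grid, x, y):
    agent = grid[x][y]
    if agent is None:
        return False
    occupied = [n for n in neighbours(grid, x, y) if n is not None]
    return bool(occupied) and \
        sum(n == agent for n in occupied) / len(occupied) < THRESHOLD

grid = [[None if random.random() < 0.1 else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]

for _ in range(50):                       # a few sweeps of relocation
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if unhappy(grid, x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] is None]
    random.shuffle(movers)
    for x, y in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empties.append((x, y))

for row in grid:
    print("".join(cell or "." for cell in row))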

So you may find that if a few well-structured pieces of knowledge are
set up in the AGI system by the programmer, the Bayesian-inspired local
mechanism can allow the system to hustle along quite comfortably for a
while .... until it gradually seizes up. To bring in an analogy here,
the Complex Systems person would say this is like trying to tile a
gently curved non-Euclidean space: it looks Euclidean on a local
scale, but it would be a mistake to think you can tile it with a
Euclidean pattern.

(This is a more general version of what was previously called the
Grounding Problem, of course).

*So this is the lesson that the CAS folks are trying to bring to the
table.* (1) They know that most of the time when someone puts together
a real system of interacting, adaptive units, there can be global
regularities that are not identical to the local mechanisms. (2) They
see AGI people coming up with proposals regarding the mechanisms of
thought, but those ideas are inspired by certain aspects of what the
high-level behavior *ought* to be (e.g. Bayesian reasoning), and the AGI
people often talk as if it is obvious that these are also the underlying
local mechanisms ... but this identification of local with global is
simply not warranted!

I want to conclude by quoting one extract from your message below that
sums up the whole argument:

[Richard Loosemore wrote:]
>> Like Behaviorists and Ptolemaic Astronomers, they mistake a
>> formalism that approximately describes a system for the mechanism
>> that is actually inside the system. They can carry on like this
>> for centuries, adding epicycles onto their models in order to
>> refine them. When Bayesian Inference does not seem to cut it,
>> they assert that *in principle* a sufficiently complex Bayesian
>> Inference system really would be able to cut it ... but they are
>> not able to understand that the "in principle" bit of their argument
>> depends on subtleties that they don't think much about.
>
>
> There are subtleties to real-world intelligence that don't
> appear in standard Bayesian decision theory (he said controversially),
> but Bayesian decision theory can describe a hell of a lot more than
> naive students think. I bet that if you name three subtleties,
> I can describe how Bayes plus expected utility plus Solomonoff
> (= AIXI) would do it given infinite computing power.

You make my point for me. The Ptolemaic astronomers would have used
exactly the same argument that you do: "Name some subtle ways in which
the heavenly bodies do not move according to the standard set of
epicycles, and I can describe how an infinite number of epicycles would
do it...." Yes, yes yes! But they were wrong, because the *real*
mechanism for planetary movement was not actually governed by epicycles,
it was governed by something completely different, and all the Ptolemaic
folks were barking up the wrong tree when though their system was in
principle capable of covering the data.

I have not said exactly how to proceed from here on out (although I do
have many thoughts to share with people about how, given the above
situation, we should really try to do AI), because at the moment all I
am trying to establish is that there is a big, serious problem, coming
in from the Complex Systems community, that says that this Bayesian kind
of approach (along with many others) to building an AGI is based on
faith and wishful thinking.

And a vital corollary to the above arguments about how to build an AGI
is the fact that _absolutely guaranteeing_ a Friendly AI is impossible
the way you are trying to do it. If AGI systems that actually work are
Complex (and all the indications are that they are indeed Complex), then
guarantees are impossible. It's a waste of time to look for absolute
guarantees. (Other indications of Friendliness .... now that's a
different matter).

These points are so crucial to the issues being discussed on this list
that, at the very least, they need to be taken seriously, rather than
dismissed out of hand by people who are unbelievably scornful of the
Complex Systems community. That was the reason that I originally sent
the "Retrenchment" post.

If anyone understands what I am saying here, it would be good to hear
from you.

Richard Loosemore

Eliezer S. Yudkowsky wrote:
> Richard Loosemore wrote:
>
>>
>> Ahh! Eric B. Baum. You were impressed by him, huh? So you must be
>> one of those people who were fooled when he tried to explain qualia as
>> a form of mechanism, calling this an answer to the "hard problem" [of
>> consciousness] and making all the people who defined the term "hard
>> problem of consciousness" piss themselves with laughter at his stupidity?
>>
>> Baum really looks pretty impressive if you don't read his actual words
>> too carefully, doesn't he?
>
>
> Sure, I was pleasantly surprised by Baum. Baum had at least one new
> idea and said at least one sensible thing about it, a compliment I'd pay
> also to Jeff Hawkins. I don't expect anyone to get everything right. I
> try to credit people for getting a single thing right, or making
> progress on a problem, as otherwise I'd never be able to look favorably
> on anyone.
>
> Did Baum's explanation of consciousness dissipate the mystery? No, it
> did not; everything Baum said was factually correct, but people confused
> by the hard problem of consciousness would be just as confused after
> hearing Baum's statements. I agree that Baum failed to dissipate the
> apparent mystery of Chalmers's hard problem. Baum said some sensible
> things about Occam's Razor, and introduced me to the notion of VC
> dimension; VC dimension isn't important for itself, but it got me to
> think about Occam's Razor in terms of the range of possibilities a
> hypothesis class can account for, rather than the bits required to
> describe an instance of a hypothesis class.
>
>>> COMMUNITY (H)
>>>
>>> The students of an ancient art devised by Laplace, which is therefore
>>> called Bayesian. Probability theory, decision theory, information
>>> theory, statistics; Kolmogorov and Solomonoff, Jaynes and Shannon.
>>> The masters of this art can describe ignorance more precisely than
>>> most folk can describe their knowledge, and if you don't realize
>>> that's a pragmatically useful mathematics then you aren't in
>>> community (H). These are the people to whom "intelligence" is not a
>>> sacred mystery... not to some of us, anyway.
>>
>>
>> Boy, you are so right there! They don't think of intelligence as a
>> sacred mystery, they think it is so simple, it only involves Bayesian
>> Inference!
>
>
> Not at all. The enlightened use Bayesian inference and expected utility
> maximization to measure the power of an intelligence. Like the
> difference between understanding how to measure aerodynamic lift, and
> knowing how to build an airplane. If you don't know how to measure
> aerodynamic lift, good luck building an airplane. Knowing how to
> measure success isn't enough to succeed, but it sure helps.
>
>> Like Behaviorists and Ptolemaic Astronomers, they mistake a formalism
>> that approximately describes a system for the mechanism that is
>> actually inside the system. They can carry on like this for
>> centuries, adding epicycles onto their models in order to refine
>> them. When Bayesian Inference does not seem to cut it, they assert
>> that *in principle* a sufficiently complex Bayesian Inference system
>> really would be able to cut it ... but they are not able to understand
>> that the "in principle" bit of their argument depends on subtleties
>> that they don't think much about.
>
>
> There are subtleties to real-world intelligence that don't appear in
> standard Bayesian decision theory (he said controversially), but
> Bayesian decision theory can describe a hell of a lot more than naive
> students think. I bet that if you name three subtleties, I can describe
> how Bayes plus expected utility plus Solomonoff (= AIXI) would do it
> given infinite computing power.
>
>> In particular, they don't notice when the mechanism that is supposed
>> to do the mapping between internal symbols and external referents, in
>> their kind of system, turns out to require more intelligence than the
>> reasoning engine itself .... and they usually don't notice this
>> because they write all their programs with programmer-defined
>> symbols/concepts (implicitly inserting the intelligence themselves, you
>> see), thus sparing their system the pain of doing the work necessary
>> to ground itself.
>
>
> Historically true. I remember reading Jaynes and snorting mentally to
> myself as Jaynes described a "robot" which came complete with
> preformulated hypotheses. But it's not as if Jaynes was trying to build
> a Friendly AI. I don't expect Jaynes to know that stuff, just to get
> his probability theory right.
>
> I note that the mechanism that maps from internal symbols to external
> referents is readily understandable in Bayesian terms. AIXI can learn
> to walk across a room.
>
> I also note that I have written extensively about this very problem in
> "Levels of Organization in General Intelligence".
>
>> If these people understood what was going on in the other communities,
>> they might understand these issues. Typically, they don't.
>
>
> Again, historically true. Jaynes was a physicist before he was a
> Bayesian, but I've no reason to believe he ever studied, say, visual
> neurology.
>
> So far as I've heard, in modern-day science, individuals are polymaths,
> not communities.
>
> If I despised the communities for that, I couldn't assemble the puzzle
> pieces from each isolated community.
>
>> Here you remind me of John Searle, famous bête noire of the AI
>> community, who will probably never understand the "levels" difference
>> between systems that *are* intelligent (e.g. humans) and systems that
>> are collections of interacting intelligences (e.g. human societies)
>> and, jumping up almost but not quite a whole level again, systems that
>> are interacting species of "intelligences" (evolution).
>>
>> This is one of the silliest mistakes that a person interested in AI
>> could make. You know Conway's game of Life? I've got a dinky little
>> simulation here on this machine that will show me a whole zoo of
>> gliders and loaves and glider guns and traffic lights and whatnot.
>> Demanding that an AI person should study "evolutionary biology with
>> math" is about as stupid as demanding that someone interested in the
>> structure of computers should study every last detail of the gliders,
>> loaves, glider guns and traffic lights, etc. in Conway's Life.
>>
>> Level of description fallacy. Searle fell for it when he invented his
>> ridiculous Chinese Room. Why would you make the same dumb mistake?
>
>
> Ho, ho, ho! Let me look up the appropriate response in my handbook...
> ah, yes, here it is: "You speak from a profound lack of depth in the
> one field where depth of understanding is most important." Evolutionary
> biology with math isn't 'most important', but it sure as hell is important.
>
> To try to understand human intelligence without understanding natural
> selection is hopeless.
>
> To try to understand optimization, you should study more than one kind
> of powerful optimization process. Natural selection is one powerful
> optimization process. Human intelligence is another. Until you have
> studied both, you have no appreciation of how different two optimization
> processes can be. "A barbarian is one who thinks the customs of his
> island and tribe are the laws of nature." To cast off the human island,
> you study evolutionary biology with math.
>
> I know about separate levels of description. I'm not telling you that
> ev-bio+math is how intelligence works. I'm telling you to study
> ev-bio+math anyway, because it will help you understand human
> intelligence and general optimization. After you have studied, you will
> understand why you needed to study.
>
> I should note that despite my skepticism, I'm quite open to the
> possibility that if I study complexity theory, I will afterward slap
> myself on the forehead and say, "I can't believe I tried to do this
> without studying complexity theory." The question I deal with is
> deciding where to spend limited study time - otherwise I'd study it all.
>
>> I did read the documents. I knew about Bayes Theorem already.
>
>
> Good for you. You do realize that Bayesian probability theory
> encompasses a lot more territory than Bayes's Theorem?
>
>> A lot of sound and fury, signifying nothing. You have no real
>> conception of the limitations of mathematics, do you?
>
>
> Yeah, right, a 21st-century human is going to know the "limitations" of
> mathematics. After that, he'll tell me the limitations of science,
> rationality, skepticism, observation, and reason. Because if he doesn't
> see how to do something with mathematics, it can't be done.
>
>> You don't seem to understand that the forces that shape thought and
>> the forces that shape our evaluation of technical theories (each
>> possibly separate forces, though related) might not be governed by
>> your post-hoc bayesian analysis of them. That entire concept of the
>> separation between an approximate description of a process and the
>> mechanisms that actually *is* the process, is completely lost on you.
>
>
> I understand quite well the difference between an approximation and an
> ideal, or the difference between a design goal and a design. I won't
> say it's completely futile to try to do without knowing what you're
> doing, because some technology does get built that way. But my concern
> is Friendly AI, not AI, so I utterly abjured and renounced my old ideas
> of blind exploration. From now on, I said to myself, I understand
> exactly what I'm doing *before* I do it.
>
>> As I said at the outset, this is foolishness. I have read "Judgment
>> Under Uncertainty", and Lakoff's "Women, Fire and Dangerous Things"
>> ... sitting there on the shelf behind me.
>
>
> Okay, you passed a couple of spot-checks, you're not a complete waste of
> time.
>
> Though you still seem unclear on the realization that polymaths all
> study *different* fields, so there's nothing impressive about being able
> to name different communities. Anyone can rattle off the names of some
> obscure books they've read. It's being able to answer at least some of
> the time when someone else picks the question, that implies you're
> getting at least a little real coverage. You seem to have some myopia
> with respect to this, asking me why I was telling you to study
> additional fields when I hadn't even studied every single one you'd
> already studied. Some roads go on a long, long way. That *you* can
> name things you've studied isn't impressive. Having Lakoff on the shelf
> behind you does imply you're not a total n00bie, not because Lakoff is
> so important, but because I selected the question instead of you.
>
>>> Also known as Mainstream AI: the predicate logic users,
>>> connectionists, and artificial evolutionists. What they know about
>>> goal hierarchies and planning systems is quite different from what
>>> decision theorists know about expected utility maximization, though
>>> of course there's some overlap.
>>
>>
>> I was conjoining the decision theorists with the hard AI group, since
>> most of the AI people I know are perfectly familiar with the latter.
>
>
> Non sequitur. AI people may know some decision theory, it doesn't mean
> that decision theory is identical to AI.
>
> I would guess that not many AI people can spot-read the difference between:
>
> p(B|A)
> p(A []-> B)
>
>>>> COMMUNITY (C)
>>>>
>>>> Those to whom the term "edge of chaos" is not just something they
>>>> learned from James Gleick. These people are comfortable with the
>>>> idea that mathematics is a fringe activity that goes on at the
>>>> tractable edge of a vast abyss of completely intractable systems and
>>>> equations. When they use the term "non-linear" they don't mean
>>>> something that is not a straight line, nor are they talking about
>>>> finding tricks that yield analytic solutions to certain nonlinear
>>>> equations. They are equally comfortable talking about a national
>>>> economy and a brain as a "CAS" and they can point to meaningful
>>>> similarities in the behavior of these two sorts of system. Almost
>>>> all of these people are seriously well versed in mathematics, but
>>>> unlike the main body of mathematicians proper, they understand the
>>>> limitations of analytic attempts to characterize systems in the real
>>>> world.
>>>
>>>
>>> I'm not part of community C and I maintain an extreme skepticism of
>>> its popular philosophy, as opposed to particular successful technical
>>> applications, for reasons given in "A Technical Explanation of
>>> Technical Explanation".
>>
>>
>> You speak from a profound lack of depth in the one field where depth
>> of understanding is most important. You mistake "particular
>> successful technical applications" for the issues of most importance
>> to AI.
>>
>> There is nothing wrong with skepticism. If it is based on
>> understanding, rather than wilful, studied ignorance.
>
>
> "Shut up and learn" is a plaint to which I am, in general, prepared to
> be sympathetic. But you're not the only one with recommendations. Give
> me one example of a technical understanding, one useful for making
> better-than-random guesses about specific observable outcomes, which
> derives from chaos theory. Phil Goetz, making stuff up at random, gave
> decent examples of what I'm looking for. Make a good enough case, and
> I'll put chaos theory on the head of my menu.
>
> Many people are easily fooled into thinking they have attained some
> tremendously important and significant understanding of something they
> are still giving maxentropy probability distributions about, the qualia
> crowd being one obvious example. Show me this isn't so of chaos theory.
>
>>>> Those who could give you a reasonable account of where Penrose,
>>>> Chalmers and Dennett would stand with respect to one another. They
>>>> could easily distinguish the Hard Problem from other versions of the
>>>> consciousness issue, even if they might disagree with Chalmers about
>>>> the conclusion to be drawn. They know roughly what supervenience
>>>> is. They could certainly distinguish functionalism (various breeds
>>>> thereof) from epiphenomenalism and physicalism, and they could talk
>>>> about what various camps thought about the issues of dancing,
>>>> inverted and absent qualia.
>>>
>>>
>>> Sadly I recognize every word and phrase in this paragraph, legacy of
>>> a wasted childhood, like being able to sing the theme song from
>>> Thundercats.
>>
>>
>> Shame. There is valuable stuff buried in among the dross.
>
>
> If you've read _Technical Explanation_, you know my objection.
> Mysterious answers to mysterious questions. "Qualia" reifies the
> confusion into a substance, as did "phlogiston" and "elan vital".
>


