Re: Is complex emergence necessary for intelligence under limited resources?

From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Sep 21 2005 - 09:24:26 MDT


Ben Goertzel wrote:
> Richard,
>
> I think you have made a reasonably convincing argument why AGI should
> proceed via allowing a proto-AGI system to engage with a world via sensors
> and actuators, and construct its own symbols to represent the world around
> it. I agree that these symbols will generally not be simple, and also that
> sophisticated learning mechanisms will be required to learn them.
>
> What is not clear to me is why the "complexity" of these symbols and
> learning mechanisms necessarily has to entail "complexity" in the sense of
> complex systems theory (as opposed to just "complexity" in the sense of
> complicatedness and sophistication).

Your question makes me fear that I muddied the waters when I talked
about the complicatedness of symbols. And I was trying so hard, too!

The answer to your question was in the text, but alas it may have gotten
buried. When I talked about the "complicatedness" of the symbols, I was
summarizing something that I then (later on) tried to spell out in more
detail: I was postulating that when we try to get serious amounts of
high-level learning into an AGI (analogy-making and so on), we are
forced to put in precisely the kinds of tangled, reflexive,
self-modifying mechanisms that lead to complexity in the system as a
whole, and to complicatedness in the symbols.

Even without any other arguments, I was claiming, we should be deeply
worried that AGI programmers tend to shy away from trying to build those
very high-level learning mechanisms. Is it fair of me to say this? Do
they really shy away from them? I think they do. Many AGI folks talk
about such stuff as if it is next on the list after they get the basic
mechanisms sorted out - but it might also be that the real reason people
avoid them is that nobody has much idea how to build them *without*
straying into the domain of complex, self-modifying, tangled, recursive
(etc) systems. Let me put the same point, but coming from the other
direction: do you see any research groups throwing themselves full-tilt
into the problem of understanding those high-level learning (aka
concept-building or structure-finding) mechanisms? Oh boy, yes! For
just one example, look at the FARG group at Indiana U
(http://www.cogsci.indiana.edu/index.html). But these folks take the
complex-systems approach. They eat, drink and breathe complexity.

So it is not that the complicatedness of symbols means anything by
itself (I apologize for misleading you there) - a symbol, after all, is
a local mechanism, and the point of a complex system is that the
complexity is in the system as a whole, not in the local units. No,
what I meant was: the symbols have to be complicated precisely because
we need to make them develop by themselves using powerful (tangled,
complex) learning mechanisms. [The complicatedness of the symbols
raises a slightly different issue that I started to discuss, but for
clarity I will leave it aside here and come back to it if you wish].

I submit that most AGI people assume that they are going to be able to
crack the learning problem later, *without* having to resort to tangled
complexity. They believe that they will be able to invent all the
learning mechanisms required in an AGI without having to give those
learning mechanisms the power to transform the system as a whole into a
complex system.

I further submit that they believe the format for the symbols that they
are using now (relatively uncomplicated and interpretable, in my
terminology) will not be substantially affected by the later
introduction of those learning mechanisms. When I talked about the
symbols becoming more complicated, I was referring to this assumption,
saying that I believe that the later introduction of proper learning
mechanisms will actually affect the format of the symbols, and that the
change may be so huge that we may discover that the only way to get the
symbols to develop by themselves (supplied only with real world I/O and
no programmer intervention) is to give them so much freedom to
develop that all the apparatus we put into the symbols, which we
thought was so important, turns out to be redundant.

So all of this is about observing a process within the AI community.
Specifically, these two observations: (a) I see people (over the course of
at least four decades now) concentrating on the mechanisms-of-thought in
non-grounded systems, and postponing the problem of building the kind of
powerful learning mechanisms that could generate the symbols that are
used in those mechanisms-of-thought. (b) I see a few people embracing
the problem of those powerful learning mechanisms, but those people take
a complex-systems approach because all the indications are that the kinds
of reflexive, self-modifying characteristics needed in such learning
mechanisms will lead to complex systems. Now, why do the latter group
go straight for complex systems? We need to be careful not to
trivialise their reasons for doing so: they don't do it just because
it's fun; they don't do it because they don't know any better; they
don't do it because they are mathematically naive wimps who have no
faith in the power of mathematics to grow until it can describe things
that people previously dismissed as too difficult to describe... they
do it because they have an extremely broad range of knowledge, have come
at the problem from a number of angles, and have decided that there is a
very profound message coming from all the studies that have been done on
different sorts of complex system. And the message, as far as they are
concerned, is that *if* you are going to build a mechanism that captures
what appears to be an extremely reflexive, self-modifying and tangled
ability, such as the kinds of learning and concept building that are important
to them, *then* you had jolly well better get used to the idea that the
mechanism is going to be complex, because in the thousands upon
thousands of other examples of systems with that kind of tangledness, we
always observe an element of complexity.

So, from these two observations, I take away this conclusion. The
people who insist that we will be able to build powerful learning
mechanisms in an AGI *without* recourse to complexity are precisely
those people who have not tried to build such mechanisms. They offer a
firm conviction that they will be able to do so, but they can offer
nothing except their blind faith in the future. [And where they do
attempt to build learning mechanisms, they only try relatively simple
kinds of learning (concept building) and they have never demonstrated
that their mechanisms are powerful enough to ground a broad-based
intelligence]. The only people who have ventured into this domain have
accepted the evidence that complexity is unavoidable, and they give
reasons why they think so.

The standard response to this argument (at least from some quarters on
this list) is that I have not given a demonstration of why an AGI *cannot*
be built without complexity, or of why complexity *must* be necessary.
This is a completely nonsensical demand: a rigorous proof or
demonstration is not possible, and that is the whole point of my
argument! If I could give a rigorous mathematical or logical proof why
you cannot build an AGI without complexity, the very rigor of that proof
would invalidate my argument!

I most certainly have given a demonstration: it is an empirical one.
Look at all the examples of learning systems that work; look at the way
the non-complex AGI researchers run away from powerful learning
mechanisms and never demonstrate any convincing reasons to believe their
systems can be grounded; look at all the evidence from natural systems
that are intelligent, but which do not use crystalline, non-complex
thinking and learning mechanisms; look at all the systems which have
large numbers of interacting, self-modifying, adaptive components that
interact with the world (not just intelligent systems, but all the
others) and ask yourself why it is that we cannot find any examples
where someone could start by observing the global behavior of the system
and then reason back to the local mechanisms that must have given rise
to that behavior.
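
To make that last point concrete, here is the simplest kind of example
I have in mind (just an illustrative sketch of my own, written in
Python; it is not drawn from any of the groups or systems mentioned
above): the elementary cellular automaton known as "Rule 110". The
local mechanism could hardly be simpler (each cell updates itself by
looking only at itself and its two neighbors), and yet the global
behavior that unfolds is known to be computationally universal, so
there is no general way to watch the global pattern and reason
backwards to a tidy analytic account of the local rule.

    # Illustrative sketch: Rule 110 elementary cellular automaton.
    # The local rule is a single table lookup over three cells, yet the
    # global behavior that unfolds is famously complex (in fact,
    # computationally universal).

    RULE = 110  # encodes the next state for each of the 8 neighborhoods

    def step(cells):
        """Apply the three-cell local rule to every position (wrapping)."""
        n = len(cells)
        return [
            (RULE >> (cells[(i - 1) % n] * 4
                      + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    if __name__ == "__main__":
        width = 64
        row = [0] * width
        row[width // 2] = 1          # a single 'on' cell in the middle
        for _ in range(32):          # print 32 generations
            print("".join("#" if c else "." for c in row))
            row = step(row)

The point of the sketch is only this: even when every line of the local
mechanism is sitting in front of you, the interesting global behavior
is something you discover by running it, not something you read off the
rule. The learning mechanisms I am talking about are far more tangled
than this toy, not less.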

And at the end of all this observing, ask yourself what reason an AGI
researcher has to denigrate the human mind as a lousy design (even
though they don't understand the design!) and say that they can do it
better without introducing complexity, without offering any examples of
working (grounded) systems to back up their claim, and without offering
any mathematical proofs or demonstrations that their approach will one
day work, when they get the learning and grounding mechanisms fully
worked out.

The boot, I submit, is on the other foot. The rest of the community is
asking the non-complex AGI folks [apologies for the awkward term: I am
not sure what to call people who eschew complexity, except perhaps the
Old Guard] why *we* should go along with what looks like their blind
faith in being able to build a fully capable, grounded AGI without
resorting to complexity.

*****

I have concentrated on one aspect of the learning mechanisms (the
expected tangledness, if you like) because this is the most obvious
thing that would lead to complexity. However, this is not the only
plank of the argument. I have brought up some of these other issues
elsewhere, but it might be better for me to organize them more
systematically, rather than throw them into the pot right now.

In closing, let me say that I know I sound very negative when presenting
this argument, even though I actually do have concrete suggestions for
what we should do instead. Some people have jumped to false conclusions
about what I would recommend we do if the above were true: the fact
is, I have barely even mentioned what I think we should be doing. Right
now, my goal is to suggest that here we have an issue of truly enormous
importance, and that we should first of all accept that it really is an
issue, and then go on to talk about what can be done about it. But I want
to get to first base first and get people to agree that there is an issue.

Richard Loosemore.

> Clearly, it does in the human brain.
> But you haven't demonstrated that it does in an AGI system, nor have you
> really given any arguments in this direction.
>
> Personally I suspect you are probably right and that given limited resources
> complex-systems-style "complexity" probably IS necessary for effective
> symbol grounding. But I don't have a strong argument in this direction,
> unfortunately, just an intuition.
>
> -- Ben
>


