RE: The Relevance of Complex Systems

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Sep 08 2005 - 17:45:23 MDT


> > But what about hypothesis formation? The space of possible hypotheses
> > is very large. Most of AI is devoted to search techniques for
> > searching hypothesis space, and these techniques suck in an AGI context.
>
> Again, very important question, wish I could discuss it more. But for
> the purposes of this argument the question is whether any useful
> mechanism of hypothesis search would inevitably introduce 'Complexity'
> that would render the system thoroughly impossible to predict. I am
> not aware of any such mechanism.

Here is my specific hypothesis.

Among the many probabilistic inference problems facing an intelligent system,
one is particularly critical: attention allocation -- that is, the problem of
deciding what to pay attention to at a given point in time.

One particular aspect of this problem is "inference control": of the many
different paths an inference system may take from a given knowledge base
using its set of inference rules, it must, under resource limitations,
choose a small subset of these paths to actually pursue.

Inference control and other attention allocation problems are easily
formulated as probabilistic inference problems -- and they are not easy
ones. Solving them effectively in nontrivial situations requires a robust
approach to hypothesis formation.
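
To make this concrete, here is a toy Python sketch of what I mean by
inference control as probabilistic choice (purely illustrative; the path
names, utility numbers, and softmax selection rule are my own stand-ins,
not a description of any actual system):

    # Toy sketch: inference control as probabilistic choice among
    # candidate inference paths. Each path carries an estimated utility;
    # the controller samples a small subset to pursue under a resource
    # budget. A hypothesis formation component would be what supplies
    # the utility estimates.

    import math
    import random

    def softmax(scores, temperature=1.0):
        # Convert utility estimates into selection probabilities.
        exps = [math.exp(s / temperature) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def choose_paths(candidates, utilities, budget, temperature=1.0):
        # Sample 'budget' paths without replacement, in proportion
        # to their estimated utility.
        paths, scores, chosen = list(candidates), list(utilities), []
        for _ in range(min(budget, len(paths))):
            probs = softmax(scores, temperature)
            i = random.choices(range(len(paths)), weights=probs, k=1)[0]
            chosen.append(paths.pop(i))
            scores.pop(i)
        return chosen

    # Example: five candidate inference paths, resources for only two.
    paths = ["deduce(A,B)", "abduce(C)", "induce(D,E)", "analogize(F)", "revise(G)"]
    estimates = [0.9, 0.4, 0.7, 0.2, 0.5]
    print(choose_paths(paths, estimates, budget=2))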

My conjecture is that any useful mechanism of hypothesis search, inserted
specifically into the inference mechanism involved in attention allocation
and inference control, is going to introduce complex dynamics that render
the system extremely difficult to predict.

The same problem occurs, even more severely, when one steps beyond
inference control to the issue of automatically learning new approximate
inference heuristics. Learning such heuristics requires probabilistic
reasoning coupled with creative hypothesis formation, and again, any useful
hypothesis formation heuristic inserted here is likely to lead to
unpredictable dynamics in the system as a whole.
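
Again, a purely illustrative sketch: here the "heuristic" is just a weight
vector scoring inference paths, and hypothesis formation is random
perturbation kept when a noisy performance estimate improves. The objective
and noise model are made up for illustration; the point is only that the
search step is intrinsically stochastic:

    # Toy sketch: learning an approximate inference heuristic by
    # stochastic search. Candidate hypotheses are random perturbations
    # of the current weights, kept when estimated performance improves.
    # The random perturbation is the stochastic hypothesis formation
    # step the argument refers to.

    import random

    def estimated_performance(weights):
        # Stand-in for an expensive probabilistic evaluation of how
        # well the heuristic guides inference; here, a noisy toy
        # objective with an arbitrary target.
        target = [0.9, 0.1, 0.5]
        error = sum((w - t) ** 2 for w, t in zip(weights, target))
        return -error + random.gauss(0, 0.01)  # noisy estimate

    def learn_heuristic(steps=200, step_size=0.1):
        weights = [random.random() for _ in range(3)]
        best_score = estimated_performance(weights)
        for _ in range(steps):
            candidate = [w + random.gauss(0, step_size) for w in weights]
            score = estimated_performance(candidate)
            if score > best_score:  # keep improving hypotheses
                weights, best_score = candidate, score
        return weights

    print(learn_heuristic())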

The unpredictability in these cases comes from the recursiveness of the
situation and the existence of a positive Lyapunov exponent: i.e.,
uncertainty of amount x regarding the dynamics of the hypothesis formation
component of "attention allocation / inference control / inference rule
learning" will generally lead to uncertainty of amount greater than x in
the rest of the system. Thus small uncertainties will propagate and
increase over time -- and small uncertainties WILL exist, because all
useful hypothesis formation heuristics are going to be stochastic to some
extent.
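
A standard toy illustration of the point (the logistic map is not my
system, just the textbook example of a positive Lyapunov exponent): a
perturbation of size x grows by roughly a factor of two per step, so
micro-level uncertainty quickly becomes macro-level unpredictability:

    # Two trajectories of the chaotic logistic map (r = 4, Lyapunov
    # exponent ln 2 > 0), started a distance 1e-6 apart. The separation
    # roughly doubles each step until it saturates at the size of the
    # state space itself.

    def logistic(state, r=4.0):
        return r * state * (1.0 - state)

    a, b = 0.400000, 0.400001  # initial uncertainty x = 1e-6
    for step in range(25):
        a, b = logistic(a), logistic(b)
        if step % 5 == 4:
            print(f"step {step + 1:2d}: separation = {abs(a - b):.6f}")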

THIS is "chaos theory" -- or at least a qualitative heuristic argument
inspired by chaos theory. I am curious to know how you folks intend to get
around it.

-- Ben Goertzel


