From: Richard Loosemore (rpwl@lightlink.com)
Date: Tue Oct 25 2005 - 08:44:22 MDT
Russell Wallace wrote:
> Agreed. Richard, I think you and the people you're debating with are
> mostly talking past each other, because you're using language that just
> isn't up to the job. I'd be interested in seeing a draft specification
> for the tools/framework/whatever you want to build, with specifics on
> how you think it would help; if you could write up something like that,
> we could at least provide more constructive criticism.
Everyone wants to see a draft specification. Under other circumstances
this might be understandable, but if nobody understands my *reason* for
suggesting the type of development environment that I outlined, then it
would be a waste of time for me to lay out the spec, because it would be
judged against all the wrong criteria. And if, on the other hand,
people did completely get the motivation for the environment, then the
details would be much less important.
I am distressed, because the common thread through all the replies so
far has been an almost total miscomprehension of the basic reason why I
suggested the environment. And I do not think this is entirely my fault: I
have looked back over my writing, and the information was clearly stated
in those posts. I suspect that many people have too little
time to do more than skim-read the posts on this list, and as a result
they get incredibly superficial ideas about what was said.
I am going to split this message at this point, because I am getting
close to the end of my tether.
For anyone who reads the explanation below and still finds no spark of
understanding, I say this: go do some reading. Read enough about the
world of complex systems to have a good solid background, then come back
and see if this makes sense. Either that, or go visit with the folks at
Santa Fe, or bring those folks in on the discussion. I am really not
going to beat my head against this any more.
I will try a different way to illustrate the underlying reasoning.
First, I need to ask you to accept one hypothetical... you're going to
have to work with me here and not argue against this point, just accept
it as a "what if". Agreed?
Here is the hypothetical. Imagine that cognitive systems consist of a
large number of "elements" which are the atoms of knowledge
representation (an element represents a thing in the world, but that
"thing" can be concrete or abstract, a noun-like thing or an action or
process .... anything whatsoever). Elements are simple computational
structures, we will suppose. They may have a bit of machinery inside
them (i.e. they are not passive data structures, they are active
entities), and they have connections to other elements (connections of
various kinds, including transient and long-term ones). For
the most part, all elements have the same kind of structure and code
inside them, but different data (so, to a first approximation, an
element is not an arbitrary piece of code, like an Actor, but more like
an elaborate form of connectionist "unit").
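(To make the hypothetical a little more concrete, here is a toy sketch in
Python. Every name in it is my own invention for the sake of illustration,
not a proposed design; the point is only "same structure and code in every
element, different data".)

    class Element:
        """One atom of knowledge representation (purely illustrative).

        Every element shares this structure and code; only its data,
        i.e. the pattern it captures and its connections, differs."""

        def __init__(self, pattern):
            self.pattern = pattern     # the regularity this element captures
            self.long_term = set()     # durable connections to other elements
            self.transient = set()     # short-lived, contextual connections
            self.activation = 0.0      # a little internal state ("machinery")

        def connect(self, other, durable=False):
            (self.long_term if durable else self.transient).add(other)

        def step(self):
            # Elements are active entities, not passive data: on each cycle
            # an element updates its own state from its neighbours.
            neighbours = self.long_term | self.transient
            if neighbours:
                self.activation = sum(e.activation for e in neighbours) / len(neighbours)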
The most important aspect of an element's life is its history. When it
first comes into being, its purpose is to capture a particular
regularity (the co-occurrence of some other elements, perhaps), and from
then on it refines itself so as to capture more precisely the pattern
that it has made its own. So, when I first encounter a dog, I might
build an element that represents "frisky thing with tail that barks and
tries to jump on me", and then as my experience progresses, this concept
(aka element) gets refined in all the obvious ways and becomes
sophisticated enough for me to have a full-blown taxonomy of all the
different types of dogs.
Notice one important thing: the specific form of the final dog-element
is a result of (a) a basic design for the general form of all elements,
and (b) the learning mechanisms that caused the dog-element to grow into
the adult form, as a result of experience.
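(Continuing the toy Python sketch, again purely to make the flow concrete:
the two rules below, capture-by-co-occurrence and refine-by-intersection,
are crude stand-ins that I have invented for illustration, not proposals
for real learning mechanisms.)

    def capture_regularity(co_occurring):
        """A new element is born to capture a regularity: here, simply
        the co-occurrence of some existing elements."""
        new = Element(pattern=frozenset(co_occurring))
        for e in co_occurring:
            new.connect(e, durable=True)
        return new

    def refine(element, observed):
        """On each later encounter, the element adjusts its pattern to
        fit its own experience a little more precisely (here, crudely,
        by keeping only what was present in both)."""
        element.pattern = frozenset(element.pattern) & frozenset(observed)

    # First encounter with a dog: "frisky thing with a tail that barks
    # and tries to jump on me"
    frisky, tail, barks, jumps = (Element(p) for p in
                                  ("frisky", "tail", "barks", "jumps-on-me"))
    dog = capture_regularity({frisky, tail, barks, jumps})
    refine(dog, {frisky, tail, barks})   # not every later dog jumps on me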
Now, moving on from this set of basic assumptions, let's consider what
might happen if someone (a cognitive scientist) were to try to figure
out how this cognitive system works, given very poor access to its
functioning.
The scientist might start out by declaring that they have an idea for
the format of the adult elements, derived from various notions about what
ideas and concepts are, and how they relate to one another. What is
more, the scientist might discover that they can do quite well at first:
they can write out some supposed adult elements (including both the
presumed form, and the specific content), specify how they interact with
each other, and show that IF they give the system that innate knowledge,
they can get the system to show a certain amount of intelligent behavior.
But now along comes a complex systems theorist to spoil the party. The
CST says "This looks good, but in a real system those adult elements
would have developed as a result of the learning mechanisms interacting
with the real world, right? And the system would recognise real-world
patterns as a result of the recognition mechanisms (which were also
developed as a result of experience) operating on raw sensory input, right?"
The cognitive scientist agrees, and says that the learning mechanisms
are a tough issue, and will be dealt with later.
But now the complex systems theorist looks a little concerned, and says
"I understand that there is a distinction between the structure (the
basic design) of the elements and their specific content, and I do
understand that while the content changes during development, the
structure of the elements does not.... but having said that, doesn't the
element structure (as well as the structure of the "thinking" and
"reasoning" mechanisms) have to be chosen to fit the learning
mechanisms, not the other way around?"
And the cognitive scientist replies that, no, there is no reason to do
that: we can easily study the knowledge representation and thinking and
reasoning mechanisms first, then later on develop appropriate learning
mechanisms that produce the content that goes inside those structures.
But now the complex systems theorist is really worried. "Hang on: if
you build learning mechanisms with the kind of power you are talking
about (with all that interaction and so on), you are going to be
creating the Mother of all complex systems. And what that means is, to
get your learning systems to actually work and stably generate the right
content, you will eventually have to change the design of the elements.
Why? Because all our experience with complex systems indicates that if
you start by looking at the final adult form of a system of interacting
units like that, and then try to design a set of local mechanisms
(equivalent to your learning mechanisms in this case) that could
generate that particular adult content, you will get absolutely
nowhere. So in other words, by the time you have finished the learning
mechanisms you will have completely thrown away your initial presupposed
design for the structure and content of the adult elements. So why
waste time working on the specific format of the element-structure now?
You would be better off looking for the kinds of learning mechanisms
that might generate *anything* stable, never mind this presupposed
structure you have already set your heart on."
And that's where I come in.
The development environment I suggested would be a way to do things in
that (to some people) "backwards" way. It addresses that need, as
expressed by my hypothetical complex systems theorist, to look at what
happens when different kinds of learning mechanisms are allowed to
generate adult systems. And it would not, as some people have
insultingly claimed, be just a matter of doing some kind of random
search through the space of all possible cognitive systems ... nothing
so crude.
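(If it helps, here is the flavour of that "backwards" workflow as one more
toy Python sketch. This is emphatically not the draft specification people
keep asking for; grow(), sense() and the stability test are hypothetical
placeholders invented only to make the point.)

    def is_stable(elements, min_size=10):
        """Placeholder stability test: the element population neither
        collapsed nor exploded.  A real criterion would be far subtler."""
        return min_size <= len(elements) <= 100 * min_size

    def explore(learning_mechanisms, world, generations=1000):
        """Instead of hand-designing adult elements and hoping that
        learning mechanisms can be bolted on later, run each candidate
        learning mechanism against the same (simulated) world and keep
        those that grow *any* stable adult population of elements."""
        survivors = []
        for mechanism in learning_mechanisms:
            elements = []                    # no presupposed adult content
            for _ in range(generations):
                elements = mechanism.grow(elements, world.sense())
            if is_stable(elements):
                survivors.append(mechanism)
        return survivors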
You can dispute that the above characterization of cognitive systems is
correct. All power to you if you do: you will never get what I am
trying to say here, and there would be no point in my talking about the
structure of the development environment.
I rest my case.
Richard Loosemore.