Loosemore's Collected Writings on SL4 - Part 7

From: Richard Loosemore (rpwl@lightlink.com)
Date: Sat Aug 26 2006 - 20:37:56 MDT


[begin part 7]

************************************************************
*                                                          *
*        Software Development Environments for AGI         *
*                                                          *
************************************************************

Ben Goertzel wrote:
> Richard,
>
> It's true that making an AGI, given current software
> technology, is a big pain, and it takes a long time to
> get from vision to implementation.

> I agree that better software tools would help make
> the process a lot easier, even though I have a feeling
> your vision of better software tools is a bit
> overidealistic.
>
> However, I have chosen to focus on AGI itself rather
> than on building better tools, because I've judged
> that given my limited resources, I'll probably
> get to AGI faster via focusing on AGI than via
> focusing on tools first.
>
> While tools work is conceptually easier than AGI
> work by far, it still requires a lot of thought and
> a lot of manpower.
>
> I would be more interested in your tools ideas if
> they were presented in a more concrete way.

But it would be a misunderstanding to treat my suggestion as "here is a
possible good way to build an AGI." If it were that sort of suggestion,
I would be just one of a hundred tool designers with (what they thought
were) great ideas.

I am saying something much more serious. I am saying that we *need* to
do things this way. We will eventually realise that anything else is not
going to work.

We have to build systems that grow their own representations; we cannot
presuppose those representations and then, later, tack on some learning
mechanisms to feed those representations with new knowledge. This point
is fundamental to my argument, so make sure that you are absolutely
clear about it before we discuss the fine details of the environment.

Richard Loosemore

************************************************************
*                                                          *
*           The Complex Systems Critique (Again)           *
*                                                          *
************************************************************

This reply is primarily directed at HC and Ben Goertzel, who have given
two of the most insightful responses to what I wrote.

I won't quote your specific posts back at you, because I am trying not
to pollute the discussion with too many n-th order quotes. Instead, I
have read what you say and I will try to reply to the spirit of it.

There are a couple of subtle traps that we sometimes fall into when we
talk about the relevance of "complex systems" to AGI design. The second
brings us right to the heart of the issue; the first is easier, so I'll
deal with that first.

The first trap is to think that I am advocating something at the level
of using specific mathematics, or known CAS systems, or accepted CAS
theories (such as they are) to be the new basis of AGI research.

Not at all: I am merely taking a fairly simple result, applying it to
cognitive systems, and coming to a conclusion about strategy. Then I'm
outa there: bye bye Santa Fe, back to work.

The second trap is much harder to state, but I'll try. It involves a
distinction between three things that I might be saying, only one of
which is true:

(1) Am I saying that the thinking and reasoning mechanisms (the ones to
be found in an adult system) are acting as a complex system on a
moment-by-moment basis? In other words, if we could look at the local,
low-level functioning of those mechanisms, would we find a complex
systems disconnect between that level and (global) thinking and
reasoning behavior? NO! I am not saying that. I think it is possible,
but that is not my claim at the moment: I am neutral about this issue.

(2) Am I saying .... exactly ditto, but about the learning mechanisms
(the things that build new concepts as a result of experience)? If we
looked at the concept-building mechanisms, would we find that we could
not relate local to global? Again: NO! I am not saying that; I am
neutral about that also.

(3) What I am trying to talk about is the way that the learning
mechanisms interact with a real-world environment over the course of the
system's lifetime of learning, generating all the knowledge that the
system has as an adult. This is a long-term process, and its end result
is going to be governed by the cumulative effect of some very CAS-like
mechanisms (the learning mechanisms) interacting with the world. Here is
where I find the trouble. This process, considered as a system, contains
at least the possibility of a complex-systems-like disconnect between
the local mechanisms (the learning mechanisms proper) and the global
behavior (the knowledge generated by those mechanisms by the time the
system gets to be an adult). It is not a moment-by-moment disconnect; it
is a lifetime disconnect.

To illustrate with an example of what might happen: you could insert
your chosen learning mechanisms, let them interact with the world, and
then be surprised at the end of the day to find that, say, the system
just never managed to acquire certain kinds of abstraction; and when you
tried to figure out why this was happening, there might be nothing local
that you could put your finger on. You would simply be getting something
wrong at the end of the process, but because it was the result of a
long-term interaction (i.e. a complex system effect), you might not be
able to attack it directly.
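
To make the flavor of that local/global disconnect concrete, here is a
minimal sketch, purely an analogy and not a model of any learning
mechanism or AGI design: an elementary cellular automaton whose local
update rule is completely known, yet whose accumulated global structure
cannot be read off from the rule by local inspection. The rule number,
grid width and step count below are just illustrative choices.

    # Toy illustration of a local/global disconnect (analogy only):
    # the local rule is fully known, but the pattern that accumulates
    # over many steps cannot be predicted by inspecting the rule locally.

    RULE = 110      # Wolfram rule number: stands in for the "learning mechanisms"
    WIDTH = 101     # number of cells
    STEPS = 200     # stands in for lifetime learning

    def step(cells, rule=RULE):
        """Apply the local rule to each cell, given its two neighbours."""
        n = len(cells)
        new = []
        for i in range(n):
            left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            pattern = (left << 2) | (centre << 1) | right   # neighbourhood code 0..7
            new.append((rule >> pattern) & 1)               # look up the rule bit
        return new

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1       # a single "seed" experience

    history = [cells]
    for _ in range(STEPS):
        cells = step(cells)
        history.append(cells)

    # The global structure in `history` is an emergent product of the rule
    # interacting with its own accumulated state; if that outcome is not
    # what you wanted, there is nothing local to point a finger at.
    print("final density:", sum(history[-1]) / WIDTH)

The point of the sketch is only this: nothing in the local rule is
"broken" when the global outcome turns out not to be what you wanted;
the trouble lives in the long-term accumulated interaction, which is
exactly the kind of effect I am worried about for lifetime learning.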

Now, in one sense all I want to do is get people to discuss this latter
possibility. Just the possibility! I want someone to acknowledge that
this might turn out to be the way things happen. We might not have
seriously run up against the problem yet, because nobody has subjected
their AGI model to the test of getting it to build almost all of its
knowledge using just the combination of learning mechanisms and messy
real-world experience. Is that not also agreed? That no one has really
done an end-to-end test of a real, non-toy, general knowledge-
acquisition mechanism yet?

And if this is true (that nobody has done such a test yet), is it not
also true that, if my hypothesis is correct, the only way we would start
to really notice the effect is after we had done a few long-term
learning runs and found that the learning mechanisms were simply not
working?

You might say: why expect trouble when we have no reason to believe that
there will be trouble? My response has been: if the learning mechanisms
have the characteristics we generally take them to have, and if as a
result they look like they will display the usual complex systems
disconnect between local and global, then the experience of the complex
systems community is that it would be truly astonishing if we did not
have trouble.

Finally, I am using this argument as a reason to adopt a new research
*paradigm* (exemplified by my suggested development environment and
methodology), not a particular *model of cognition*. I was very clear
about this, but a number of people have persistently and viciously
slammed my words because they say the model [sic] I have proposed is
stupidly vague. There was never any such model.

I think this is the clearest I have managed to state the argument.

I might add that anyone who advances a thesis in this kind of forum is
always caught between a rock and a hard place: if I am brief, my wording
is so concise that it lends itself nicely to misinterpretation by people
who take one paragraph at a time and criticise out of context; but if I
give a long, detailed account I am accused of being long-winded. Damned
if I do, damned if I don't.

Richard Loosemore.

[end part 7]


