Loosemore's Collected Writings on SL4 - Part 5

From: Richard Loosemore (rpwl@lightlink.com)
Date: Sat Aug 26 2006 - 20:36:42 MDT


[begin part 5]

************************************************************
*                                                          *
*           The Complex Systems Critique (Again)            *
*                                                          *
************************************************************
> Eliezer S. Yudkowsky wrote:
>
>> Richard Loosemore wrote:
>> As far as I am concerned, the widespread (is
>> it really widespread?) SL4 assumption that
>> "strictly humanoid intelligence would not
>> likely be Friendly ...[etc.]" is based on a
>> puerile understanding of, and contempt of,
>> the mechanics of human intelligence.
>
> Untrue. I spent my first six years from 1996 to 2002
> studying the mechanics of human intelligence, until I
> understood it well enough to see why it wouldn't work. I
> suppose that in your lexicon, "Complex Systems Theory"
> and "mechanics of human intelligence" are synonyms.
> In my vocabulary, they are not synonyms, and studying
> such mere matters as neuroscience and cognitive
> psychology counts as trying to understand the mechanics
> of human intelligence, whatever my regard for
> "Complex Systems Theory" as a source of useful,
> predictive, engineering-helpful hypotheses about
> human intelligence. Disdain for your private theory of
> human intelligence is not the same as disdain for
> understanding the mechanics of human intelligence.

Once again, you demonstrate my point for me....

1) You deliver irrelevant insults:

> I suppose that in your lexicon, "Complex Systems Theory"
> and "mechanics of human intelligence" are synonyms.

I don't need to respond to this (it is too obviously silly).

2) You make assertions about things you know nothing about:

> Disdain for your private theory of human intelligence
> is not the same as disdain for understanding the
> mechanics of human intelligence.

I have never discussed my "private theory of human intelligence" on this
list, so how could you possibly disdain it? I have discussed one issue
only. You responded to that issue with personal insults, red herrings
and numerous comments that showed you did not know the difference
between complex systems theory and chaos theory (a confusion that
rendered most of your other statements worthless).

3) And you demonstrate the most amazing contempt for the subject and all
the other people who study it:

> I spent my first six years from 1996 to 2002 studying the
> mechanics of human intelligence, until I understood it well enough to
> see why it wouldn't work.

To be able to justify your assertion that "it" wouldn't work, you would
have to do experiments (simulations) and/or produce some theoretical
ideas, and *most important of all* you would also have to make coherent
replies to those who find fault with your position. I have read your
writings on the subject: I can find nothing but rambling,
stream-of-consciousness speculations. Where are your experiments? Where
are your coherent arguments in support of this claim? Where are your
coherent replies to your critics?

I was one of those critics. I produced arguments based on a vast body
of empirical data. You made no coherent response to those arguments,
never demonstrating that you even comprehended what the arguments
actually were about. In fact, looking back over the sum total of words
you wrote against the position that I elaborated earlier, I can find
nothing that is not either irrelevant posturing, a dismissal of an
entire body of research on the grounds that you consider it worthless
(this being exactly the contempt that I referred to above), an ad
hominem attack on either my credentials or those of hundreds of other
researchers (whose work you confuse with work going on in another
field), or a blatant non sequitur.

  The specific point I made in my post above was a small one: I was
referring to the way that some people here tend to assert that the
structure of the human mind is clearly sub-optimal, or clearly flawed,
or clearly not the best way to design an AGI, or clearly bad from the
point of view of guaranteeing Friendliness. This is an entirely
debatable point of view, but when challenged, a vocal subset of SL4
likes to respond not with arguments, but with the kind of invective that
you just delivered.

************************************************************
*                                                          *
*     Building an AGI using Systematic Experimentation     *
*                                                          *
************************************************************

1) "Prove" that an AGI will be friendly? Proofs are for mathematicians.
I consider the use of the word "proof," about the behavior of an AGI, as
on the same level of validity as the use of the word "proof" in
statements about evolutionary proclivities, for example "Prove that no
tree could ever evolve, naturally, in such a way that it had a red
smiley face depicted on every leaf." Talking about proofs of
friendliness would be a fundamental misunderstanding of the role of the
word "proof". We have enough problems with creationists and intelligent
design freaks abusing the word, without us getting confused about it too.

If anyone disagrees with this, it is important to answer certain
objections. Do not simply assert that proof is possible; give some
reason why we should believe it to be so. In order to do that, you have
to give some coherent response to the arguments I previously set out
(in which the Complex Systems community asked you to explain why AGI
systems would be exempt from the empirical regularities they have
observed).

2) Since proof is impossible, the next best thing is a solid set of
reasons to believe in friendliness of a particular design. I will
quickly sketch how I think this will come about.

First, many people have talked as if building a "human-like" AGI would
be very difficult. I think that this is a mistake, for the following
reasons.

I think that what has been going on in the AI community for the last
couple of decades is a prolonged bark up the wrong tree, and that this
has made our lives more difficult than they need to be.

Specifically, I think that we (the early AI researchers) started from
the observation of certain *high-level* reasoning mechanisms that are
observable in the human mind, and generalized to the idea that these
mechanisms could be the foundational mechanisms of a thinking system.
The problem is this: when we (as practitioners of philosophical logic)
get into discussions about the amazing way in which "All Men Are Mortal"
can be combined with "Socrates is a Man" to yield the conclusion
"Socrates is Mortal", we are completely oblivious to the fact that a
huge piece of cognitive apparatus is sitting there, under the surface,
allowing us to relate words like "all" and "mortal" and "Socrates" and
"men" to things in the world, and to one another. We are also missing
the fact that there are vast numbers of other conclusions that this
cognitive apparatus arrives at, on a moment-by-moment basis, that are
extremely difficult to squeeze into the shape of a syllogism. In other
words, you have this enormous cognitive mechanism coming to conclusions
about the world all the time, and then it occasionally comes to
conclusions using just *one*, particularly clean, little subcomponent
of its array of available mechanisms, and we naively seize upon this
subcomponent and think that *that* is how the whole thing operates.
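
(To make the point concrete, here is a deliberately trivial sketch --
my own illustration, not anything anyone on this list has proposed --
of how little code that clean subcomponent actually takes. The fact
that the syllogism fits in a dozen lines of forward chaining, while the
apparatus that relates "Socrates", "men" and "mortal" to the world fits
in none of them, is exactly what makes the subcomponent so seductive.)

    facts = {("man", "socrates")}                 # Socrates is a man
    rules = [(("man", "X"), ("mortal", "X"))]     # All men are mortal

    def forward_chain(facts, rules):
        """Repeatedly apply the rules to the facts until nothing new appears."""
        derived = set(facts)
        while True:
            new = {(head, item)
                   for (body_pred, _), (head, _) in rules
                   for (pred, item) in derived
                   if pred == body_pred}
            if new <= derived:
                return derived
            derived |= new

    print(forward_chain(facts, rules))
    # -> a set containing ('man', 'socrates') and ('mortal', 'socrates')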

By itself, this argument against the "logical" approach to AI might only
be a feeling, so we would then have to divide into two camps and each
pursue our own vision of AI until one of us succeeded.

However, the people on my side of the divide have made our arguments
concrete enough that we can now be more specific about the problem, as
follows.

What we say is this. The logic approach is bad because it starts with
presumptions about the local mechanisms of the system and then tries to
extend that basic design out until the system can build its own new
knowledge, and relate its fundamental concepts to the sensorimotor
signals that connect it to the outside world.... and from our experience
with complex systems we know that that kind of backwards design approach
will usually mean that the systems you design will partially work but
always get into trouble the further out you try to extend them. Because
of the complex-systems disconnect between local and global, each time
you start with preconceived notions about the local, you will find that
the global behavior never quite matches up with what you want it to be.
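
(A toy illustration of that local/global disconnect, using a standard
complex-systems model rather than anything AGI-specific: random Boolean
networks. The local rule is the same kind of thing in both runs below
-- each node computes a random Boolean function of K inputs -- yet the
global response to a one-bit perturbation differs qualitatively with K,
and you cannot read that off the local rule; you have to run the
system.)

    import random

    def make_network(n, k, rng):
        """Each node gets K randomly chosen inputs and a random Boolean function."""
        inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
        tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
        return inputs, tables

    def step(state, inputs, tables):
        """Synchronously update every node from its inputs via its lookup table."""
        return [tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                for i in range(len(state))]

    def damage(k, n=500, steps=50, seed=0):
        """Flip one bit of the initial state and see how far the two runs diverge."""
        rng = random.Random(seed)
        inputs, tables = make_network(n, k, rng)
        a = [rng.randint(0, 1) for _ in range(n)]
        b = list(a)
        b[0] ^= 1                                 # a single, purely local change
        for _ in range(steps):
            a, b = step(a, inputs, tables), step(b, inputs, tables)
        return sum(x != y for x, y in zip(a, b)) / n

    # Typically the K=1 networks freeze the perturbation out, while the K=4
    # networks spread it through a large fraction of the nodes -- but the
    # only way to know which regime a given design is in is to run it.
    print("K=1 damage:", damage(k=1))
    print("K=4 damage:", damage(k=4))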

So in other words, our criticism is *not* that you should be looking for
nebulous or woolly-headed "emergent" properties that explain cognition
-- that kind of "emergence" is a red herring -- instead, you should be
noticing that the hardest part of your implementation is always the
learning and grounding aspect of the system. Everything looks good on a
small, local scale (especially if you make your formalism extremely
elaborate, to deal with all the nasty little issues that arise) but it
never scales properly. In fact, some who take the logical approach will
confess that they still haven't thought much about exactly how learning
happens ... they have postponed that one.

This is exactly what has been happening in AI research. And it has been
going on for, what, 20 years now? Plenty of theoretical analysis. Lots
of systems that do little jobs a little tiny bit better than before. A
few systems that are designed to appear, to a naive consumer, as though
they are intelligent (all the stuff coming out of Japan). But overall,
stagnation.

So now, if this analysis is correct, what should be done?

The alternative is to do something that has never been tried.

Build a development environment that allows rapid construction of large
numbers of different systems, so we can start to study, empirically,
the effects of changing the local mechanisms. We should try
cognitively-inspired mechanisms at the local level, but adapt them
according to what makes them globally stable. The point is not to
presuppose what the local mechanisms are, but to use what we know of
human cognition to get mechanisms that are in the right ballpark, then
experimentally adjust them to find out under what conditions they are
both stable and doing the things we want them to do.
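
(As a minimal sketch of the kind of harness I mean -- every name and
mechanism below is an invented placeholder, not a description of my
actual candidate mechanisms -- the experiment is: parameterize a local
mechanism, build many system variants from it, and measure some crude
global property, such as whether the system settles down.)

    import itertools
    import random
    from dataclasses import dataclass

    @dataclass
    class LocalMechanism:
        """A stand-in for one candidate local mechanism, with two tunable knobs."""
        learning_rate: float
        activation_threshold: float

        def update(self, unit_state, neighbour_states):
            drive = sum(neighbour_states) / max(len(neighbour_states), 1)
            if drive > self.activation_threshold:
                return unit_state + self.learning_rate * (1.0 - unit_state)
            return unit_state * (1.0 - self.learning_rate)

    def run_system(mechanism, n_units=100, steps=200, seed=0):
        """Build one system from the mechanism and return a crude global score."""
        rng = random.Random(seed)
        state = [rng.random() for _ in range(n_units)]
        last_change = 0.0
        for _ in range(steps):
            new = [mechanism.update(state[i], state[max(i - 3, 0):i + 3])
                   for i in range(n_units)]
            last_change = sum(abs(a - b) for a, b in zip(new, state)) / n_units
            state = new
        return 1.0 - min(last_change, 1.0)   # 1.0 = settled, 0.0 = still churning

    # Sweep the knobs: the question is which regions of the parameter space
    # are globally stable, which is exactly what cannot be settled on paper.
    for lr, th in itertools.product([0.01, 0.1, 0.5], [0.3, 0.5, 0.7]):
        mech = LocalMechanism(learning_rate=lr, activation_threshold=th)
        print(f"lr={lr:<4} threshold={th}: stability={run_system(mech):.3f}")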

I have been working on a set of candidate mechanisms for years, and
also on the characteristics of a software development environment that
would allow this rapid construction of systems. There is no hiding the
fact that this would be a big project, but I believe it would produce a
software tool that all researchers could use to quickly create systems
that they could study; with a large number of people attacking the
problem from different angles, progress would be rapid.

What I think would happen if we tried this approach is that we would
find ourselves not needing enormous complexity after all. This is just
a hunch, I agree, and I offer it as no more than that: we cannot
possibly know, until we try such an approach, whether we will find a
quagmire or an easy sail to the finish.

But I can tell you this: we have never tried such an approach before,
and the one thing that we do know from the complex systems research (you
can argue with everything else, but you cannot argue with this) is that
we won't know the outcome until we try.

(Notice that the availability of such a development environment would
not in any way preclude the kind of logic-based AI that is now the
favorite. You could just as easily build such models. The problem is
that people who did so would be embarrassed into showing how their
mechanisms interacted with real sensory and motor systems, and how they
acquired their higher level knowledge from primitives.... and that might
be a problem, because in a side-by-side comparison I think it would
finally be obvious that the approach simply did not work. Again,
though, that is just a hunch. I want the development environment to
become available so we can do such comparisons, and stop philosophizing
about it.)

Finally, on the subject that we started with: the motivations of an
AGI. The class of system I am proposing would have a
motivational/emotional system that is distinct from the immediate goal
stack. The two are related, but not to be confused with one another.

I think we could build small scale examples of cognitive systems, insert
different kinds of M/E systems in them, and allow them to interact with
one another in simple virtual worlds. We could study the stability of
the systems, their cooperative behavior towards one another, their
response to situations in which they faced threats, etc. I think we
could look for telltale signs of breakdown, and perhaps even track their
"thoughts" to see what their view of the world was, and how that
interacted with their motivations.
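
(Again, a hedged sketch with invented placeholders rather than a real
design: hold the cognitive machinery fixed, swap the M/E module, and
observe a global behavioural measure, such as how often two agents
share a resource in a trivially simple virtual world.)

    import random

    class AggressiveME:
        """M/E module that weights actions toward grabbing resources."""
        def weigh(self, action, own_store, other_store):
            return 2.0 if action == "take" else 0.5

    class CooperativeME:
        """M/E module that weights actions toward sharing when the agent is ahead."""
        def weigh(self, action, own_store, other_store):
            if action == "share" and own_store > other_store:
                return 2.0
            return 1.0

    class Agent:
        def __init__(self, me_system, rng):
            self.me, self.rng, self.store = me_system, rng, 5

        def choose(self, other):
            actions = ["take", "share", "wait"]
            weights = [self.me.weigh(a, self.store, other.store) for a in actions]
            return self.rng.choices(actions, weights)[0]

    def run_world(me_class, steps=1000, seed=0):
        """Two agents with the same M/E module interact; return a cooperation score."""
        rng = random.Random(seed)
        a, b = Agent(me_class(), rng), Agent(me_class(), rng)
        shares = 0
        for _ in range(steps):
            for actor, other in ((a, b), (b, a)):
                act = actor.choose(other)
                if act == "take" and other.store > 0:
                    other.store -= 1
                    actor.store += 1
                elif act == "share" and actor.store > 0:
                    actor.store -= 1
                    other.store += 1
                    shares += 1
        return shares / (2 * steps)       # crude "cooperative behaviour" observable

    print("aggressive M/E, share rate: ", run_world(AggressiveME))
    print("cooperative M/E, share rate:", run_world(CooperativeME))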

And what we might well discover is that the disconnect between M/E
system and intellect is just as it appears to be in humans: humans are
intellectual systems with aggressive M/E systems tacked on underneath.
They don't need the aggression (it was just useful during evolution),
and without it they become immensely stable.

I think that we could also come to understand the nature of the
"attachment" mechanisms that make human beings have an irrational
fondness for one another, and for the species as a whole, and
incorporate that in a design. I think we could study the effects of
that mechanism, and come to be sure of its stability.

And, at the end of the day, I think we will come to understand the
nature of M/E systems so well that we will be able to say with a fair
degree of certainty that the more knowledge an AGI has, the more it
tends to understand the need for cooperation. I think we might (just
might) discover that we could trust such systems.

But we have to experiment to find out, and experiment in a way that
nobody has ever done before.

  Richard Loosemore.

[end part 5]


