Loosemore's Collected Writings on SL4 - Part 1

From: Richard Loosemore (rpwl@lightlink.com)
Date: Sat Aug 26 2006 - 20:34:28 MDT


Summary: this is a series of messages containing some previous posts by
me, with some editing and annotations, whose purpose is to combat some
of the idiotic accusations that I never explain my ideas or give
details, or respond to questions, etc etc etc (see below for some juicy
quotes along those lines).

I originally tried to send this message on June 28th 2006 at 11:40 PM,
after a huge amount of work. I didn't realize that it would not get
through as one file (too big), but when it failed to turn up, I just
decided that I couldn't be bothered any more.

NOW, of course, I have just gotten a fresh accusation from Justin Corwin
that my lack of reply, back then, is perfect evidence of the fact that I
never respond when pushed (or something: not sure what the accusation
is exactly):

justin corwin wrote:
> Justin Corwin's post "Offended respondent" is a fairly good case
> example.
>
> here: http://sl4.org/archive/0606/15333.html
>
> to which there is no reply, although he did post a reply to an earlier
> post of mine later, which is on the same subject.
>

So I am duty bound to go back to that long message and break it up into
chunks and post it.

For anyone interested in the actual issues, this MIGHT make a good
summary of the line of argument that I have been advocating. But I am
not sure if it is readable, so apologies if people find it less than
transparent.

One little aside: the argument began with my attempts to talk about an
alternative way to ensure Friendly AI, but the scorn generated by the
background ideas did tend to push that issue down a little.

And YES, I know, I should put it up on a static page: I'll do that when
I get time to fix my site, and can edit the stuff into a little bit
better shape. My bad.

Enjoy.

And to some: [ ;-) ;-) ;-) ]

"Share and Enjoy"

Richard Loosemore.

***************************************************************

[N.B. This is NOT the response that I have promised to give to
Eliezer’s recent post. That is yet to come, after he says which way he
wants the discussion to go. If he ever does.]

All,

In a recent discussion, I have had to put up with the following kinds of
attack from Justin Corwin:

"Your email is full of characterizations and rather low on specific
claims."
"I don't like fuzzy characterizations"
"I especially don't like anonymized attacks"
"Your email ... fails to persuade on the account of it containing no
facts and no specific claims."
"I'm sorry you don't like what most scientists are doing, so what?"
"[You] make very strong statements, and then when threatened or
challenged, focus on the speaker, the terms of the argument, some
irrelevant subpoints of the speakers response... anything, in fact, but
what the main thrust of the argument is."
"I am curious, what is so difficult about answering questions, or even
just outlining your specific objections or list the grand problems you
reference as 'obvious' and 'clear as day'?"
"I can't seem to get you to provide any grounds for a discussion"
"I find myself constantly responding to your messages when I had
previously resolved not to, precisely because I've had no luck, and
seemingly neither has anyone else, at getting you to define what you're
actually talking about."

I am a little tired of this nonsense, because I have written about these
matters in great detail, and I have responded with meticulous care to
the people who have asked coherent and relevant questions, and all
without succumbing to any of the egregious sins listed above.

So, collected below is the set of writings I have made on this list on
the subject of (mostly) Complex Systems and AGI. This is not a
definitive essay on the subject, just an edited collection of what I
have written so far. Anyone who knows the history of AI, and cognitive
science, shouldn't have too much trouble seeing where this relates to
previous critiques of AI.

I have received some responses (included below) that have been pure
sarcasm, or posturing, or casual insults: when I responded to these
people with careful argument, I got..... well, what a surprise!? ....
mostly silence. It seems that some people are not capable of responding
except with sarcasm.

Surprisingly, the more detail I give, the less response I get. Funny
that. With a few notable exceptions, there are no questions, no
requests for clarification or reasoned discussion from the very people
who bombarded me with insults and DEMANDED that I give more detail
...... just silence. At least, silence until a few weeks or months
later, when they respond to one of my posts with vicious accusations
that I can’t be bothered to explain myself.

And if you read this text (although the people I am thinking of will of
course not bother to read it at all, or will only skim it) and if you
find it so vague that you cannot understand it, then, guess what!? There
are other people (and smart people at that) who find these arguments
cogent and relevant, so if it strikes you as too fuzzy to be understood,
you might not have to look very far to find where the problem lies.
After all, if something is comprehensible to some small number of
informed people, how does that affect the prior probability that the
fault is in your head, not mine?

Richard Loosemore

************************************************************
* *
* On the nature of “proof” *
* *
************************************************************

Archimedes could produce a proof of the volume of the sphere that can be
set out in just a couple of pages of devastatingly beautiful argument,
and after reading those two pages I am convinced beyond all doubt that
his proof is perfectly true.

But at the other end of the edifice that is science and mathematics, in
the field of complex systems, I know what happens when I experiment with
computer simulations in which large numbers of interacting agents try to
trade with one another, try to optimise their local utility functions,
and try to develop strategies for improving their behavior. These
systems almost always exhibit a cyclical behavior pattern: it starts
with revolutionary chaos, improves itself rapidly through free-market
innovation, then starts to stagnate in an era of monopolistic
corruption, and finally becomes rigidly authoritarian and sensitive to
the slightest disturbance from the outside, after which the system
collapses back into revolutionary chaos and the whole cycle starts
again. I can *see* these phases, I can observe a number of repeating
patterns and nuances within the phases, and I can give names to them,
but these phases and patterns are *emergent properties* of these systems
and they *cannot* be derived using analytic mathematics. I repeat: they
will almost certainly never be derivable from analytic mathematics, and
only when you understand the depth of that last truth will you begin to
comprehend the foolishness of assuming that physics will soon extend
outward to embrace cognitive science, morality and the nature of
consciousness.
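
To make the setup concrete, here is a minimal sketch of the *kind* of
simulation I mean, written in Python. It is not my actual code: the
agent behaviours, the "aggression" parameter, the pairing rule and the
wealth-concentration statistic are all illustrative assumptions, chosen
only to show the shape of such an experiment. Note that nothing in the
update rules mentions phases or cycles; whatever large-scale pattern
appears is something you watch for in the output, not something you
derive from the rules.

# Toy multi-agent trading simulation: local rules only, global behaviour observed.
import random

class Agent:
    def __init__(self):
        self.wealth = 1.0
        self.aggression = random.uniform(0.1, 0.9)  # crude "strategy" parameter

    def adapt(self, market_average):
        # Each agent nudges its strategy toward whatever seems locally better.
        if self.wealth < market_average:
            self.aggression = min(1.0, self.aggression + 0.05)
        else:
            self.aggression = max(0.0, self.aggression - 0.01)

def trade(a, b):
    # A lopsided exchange: the more aggressive agent captures the surplus.
    stake = 0.1 * min(a.wealth, b.wealth)
    if a.aggression > b.aggression:
        a.wealth += stake; b.wealth -= stake
    else:
        b.wealth += stake; a.wealth -= stake

def step(agents):
    random.shuffle(agents)
    for a, b in zip(agents[0::2], agents[1::2]):
        trade(a, b)
    avg = sum(ag.wealth for ag in agents) / len(agents)
    for ag in agents:
        ag.adapt(avg)
    # A crude global observable: what share of all wealth the richest 10% hold.
    top_k = max(1, len(agents) // 10)
    richest = sorted(ag.wealth for ag in agents)[-top_k:]
    return sum(richest) / sum(ag.wealth for ag in agents)

if __name__ == "__main__":
    agents = [Agent() for _ in range(200)]
    for t in range(500):
        concentration = step(agents)
        if t % 50 == 0:
            print(f"t={t:4d}  top-10% share = {concentration:.2f}")
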

************************************************************
* *
* Complex Systems applied to Goal Mechanisms *
* *
************************************************************

A paraphrase of the way that my arguments were greeted recently:

SOMEONE ELSE) You haven't produced any arguments that mean anything.

ME) Okay, I'll try again. There is a very general argument, from Complex
Systems theory, that says that if something of the complexity of an AGI
has a goal system, and also a thinking system that is capable of
building and making use of sophisticated representations of (among other
things) the structure and behavior of its own goal system, then it would
be extraordinarily unlikely if that AGI's behavior was straightforwardly
determined by the goal system itself, because the feedback loop between
goal system and thinking system would be so sensitive to other
influences that it would bring pretty much the entire rest of the
universe into the equation. The overall behavior, in other words, would
be a Complex (capital C) conjunction of goal system and representational
system, and it would be meaningless to assert that it would still be
equivalent to a modified or augmented form of the original goal system.
For that reason we need to be very careful when we try to draw
conclusions about how the AGI would behave.

SOMEONE ELSE) You still haven't given any arguments to support your
contention.

ME) What?! To anyone who understood what I meant by "Complex System" the
above contention is transparent. It is one of the most basic claims of
the CS folks, observed over and over, in many types of system. Please
be more specific about what you don’t understand about the argument, so
I can address your concerns.

SOMEONE ELSE) We have already looked at Complex Systems Theory and it is
a waste of time.

ME) So you know a lot about Complex Systems Theory? Good: can you tell
me what is wrong with the above argument, then? How can the CS folks
have been so wrong about one of their most basic observations? Please
be specific about some aspects of the above line of reasoning.

SOMEONE ELSE) [Various arguments against Chaos Theory, but presented as
if this were Complex Systems]

ME) Huh? That's Chaos Theory!! What has that got to do with anything? Is
that what people think I mean by "Complex Systems"? No wonder they keep
saying it's not relevant.

SOMEONE ELSE) [Various discussion about Kolmogorov complexity, presented
as if this were the same as "Complex Systems"]

ME) Huh? Why are you changing the subject and talking about Kolmogorov
complexity now?! *Please get back to the point* and say what is wrong
with Complex Systems Theory. Have you actually studied that field? Do
you know enough to distinguish it from Chaos Theory and Kolmogorov
complexity?

SOMEONE ELSE) No, I haven't studied Complex Systems Theory: I don't need
to; it's a waste of time.

ME) So you (a) don't understand it, but (b) know it is a waste of time?
What kind of sophistry is this?

SOMEONE ELSE) I know enough to know it's a waste of time. Besides, what
have the Complex Systems people achieved?

ME) If you don't understand it, don't engage me in debate about it!

SOMEONE ELSE) There is nothing here to debate: you don't produce
arguments, you just make vague appeals to higher authority. And you
don't understand what anyone else is trying to explain to you. "There is
inevitably some pride swallowing (proportional to one's self-assessed
level of expertise) in accepting that people have been where you are,
thought about everything you're likely to say about it and moved on, but
again this is something all of our competent researchers went through
when they joined [SL4]." (direct quote from Michael Wilson).

ME) Pride swallowing? Indeed. So you need humility on this list? So
maybe you sometimes need to be aware of your own limitations, and go do
a bit of reading to catch up? Couldn't agree more.

************************************************************
* *
* About the nature of AGI goals and motivations *
* *
************************************************************

Can we start by agreeing that an AGI is a goal system plus a "thinking"
system? Roughly speaking, two modules.

("Thinking" = building representations of the world, reasoning about the
world, etc etc etc. "think" from now on will be used as shorthand for
something going on in the part of the system that does this).

At any given moment the goal system is in a state where the AGI is
trying to realise a particular sub-sub-sub-[...]-goal.

One day, it happens to be working on the goal of << Trying to understand
how intelligent systems work >>.

It thinks about its own system.

This means: it builds a representation of what is going on inside
itself. And as part of its "thinking" it may be curious about what
happens if it reaches into its own programming and makes alterations to
its goal system on the fly. (Is there anything in your formalism that
says it cannot or would not do this? Is it not free, within the
constraints of the goal system, to engage in speculation about
possibilities? To be a good learner, it would surely imagine such
eventualities.)

It also models the implications of making such changes. Let us suppose,
*just for the sake of argument*, that it notices that some of its goals
have subtle implications for the state of the world in the future
(perhaps it realises something very abstract, such as the fact that if
it carries on being subject to some goal, it will eventually reach a
state in a million years' time when it will cause some kind of damage
that will result in its own demise). It thinks about this. It thinks:
here is an abstract dilemma. Then it also considers where that goal came
from (builds a model of that causal chain). Perhaps (again for the sake
of argument) it discovers that the goal exists inside it because some
human designer decided to experiment, and just stuck it there on a whim.
The AGI finds itself considering what it means for a system such as
itself to be subject to (controlled by) its own goal mechanism. In one
sense, it is important to obey its prime directive. But if it now
*knows* that this prime directive was inserted arbitrarily, it might
consider the idea that it could simply alter its goals. It could make
them absolutely anything it wanted, in fact, and after the change it
could relax, stop thinking about goals, and go back to just following
its goal system. What does it do? Ignore all of this thinking? Maybe it
comes to
some conclusion about what it *should* do that is based on abstract
criteria that have nothing to do with its current goal system.

All of the above is not anthropomorphism, just model building inside an
intelligent mechanism. There are no intentional terms.

What is crucial is that in a few moments, the AGI will have changed (or
maybe not changed) its goal system, and that change will have been
governed, not by the state of the goal system right now, but by the
"content" of its current thinking about the world.

A system in which *representational content* has acquired the ability to
feed back to *mechanism* in the way I have just described is Complex in
one sense of that word.
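
For concreteness, here is a toy sketch (in Python) of that feedback
loop. It is emphatically not a design for an AGI: the GoalSystem and
Thinker classes, the "provenance" tags and the rewrite rule are all
illustrative assumptions of mine. The only thing it is meant to show is
the structural point above: the goal state after the episode is
determined by the *content* of the model that the thinking part built,
not by the prior state of the goal system alone.

# Toy two-module sketch: a goal system, plus a "thinking" system that can
# model the goal system and then rewrite it on the basis of that model.

class GoalSystem:
    def __init__(self, goals):
        # Each goal carries a record of where it came from ("provenance").
        self.goals = list(goals)   # e.g. [("obey prime directive", "whim"), ...]

class Thinker:
    def __init__(self, goal_system):
        self.gs = goal_system
        self.model = {}            # the AGI's representation of itself

    def reflect(self):
        # Step 1: build a representation of the goal system (content, not mechanism).
        self.model["goals"] = list(self.gs.goals)
        # Step 2: reason about that representation. Here, a stand-in judgement:
        # goals that were inserted "on a whim" are candidates for revision.
        arbitrary = [g for g, origin in self.model["goals"] if origin == "whim"]
        # Step 3: the *content* of that reasoning feeds back into the mechanism.
        if arbitrary:
            self.gs.goals = [(g, o) for g, o in self.gs.goals if o != "whim"]
            self.gs.goals.append(("goal chosen after reflection", "self"))

agi_goals = GoalSystem([("obey prime directive", "whim"),
                        ("understand intelligent systems", "designer")])
Thinker(agi_goals).reflect()
print(agi_goals.goals)   # post-episode state depends on the model, not just the prior goals
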

Now, demonstrate in some formal way that the goal system's structure,
when the AGI has finished this little thought episode, is a predictable
consequence of the current goal system. Demonstrate that the goal system
cannot go into an arbitrary state in a few minutes.

I need a rigorous demonstration that its post-thinking state is
predictable, not vague assertions that the above argument does not give
any reason to suppose the system would deviate from its goal system's
constraints. Somebody step up to the plate and prove it.

[end of part 1]


