RE: Complexity of AGI

From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 19 2002 - 19:01:22 MDT


> Well, yeah, cuz you think that all the higher levels of
> organization emerge
> automatically from the low-level behaviors.

"Automatically" when the low-level behaviors are designed and tuned
correctly, with emergence in mind.

Whether that's "automatic" or not is a borderline semantic case, I guess.

> The usual, in other
> words.

There is no "usual" in philosophy of mind, and my ideas certainly are not
perceived as "usual" by the AI or cog sci mainstream.

I could as well call your ideas "usual" because of their strong reliance on
the study of perception in the human brain. (Many researchers have spoken
out against the preponderance of theories of mind that lean too heavily,
conceptually, on human visual perception. Yours could be seen as fitting
into this trend.)

Both of our theories, and all the others I know of, have some similarities
to and some differences from previously proposed theories.

> You seem to be skipping over
> all the issues that I think constitute the real, critical, hard
> parts of the
> problem.

Can you give a short list of these?

> An AI is not a human. I think that an AI design would start out
> by working
> very inefficiently with a small amount of tuning.

Well, sure ... but it all comes down to *how* inefficient "very" is...

> After that
> would come the
> task of getting the AI to undertake more and more complex "tuning", a task
> which is one of the earliest forms of seed AI.

My feeling is that automated parameter tuning by optimization methods,
rather than by reflection, is going to be necessary for any complex AI
design. This comes not only from experience with Novamente... but from the
sum total experience of all computer science...
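To make concrete what I mean by tuning via optimization rather than
reflection, here is a minimal sketch in Python: a black-box hill-climbing
loop over a small parameter vector, scored by some external performance
measure. The parameter names and the evaluate() objective are purely
hypothetical stand-ins, nothing specific to Novamente.

import random

# Stand-in objective: in a real system this would run the AI with the given
# parameters and return a performance score (higher is better).
def evaluate(params):
    target = {"forgetting_rate": 0.2, "importance_decay": 0.4, "inference_depth": 2.5}
    return -sum((params[k] - target[k]) ** 2 for k in target)

def tune(params, steps=1000, sigma=0.05):
    # Simple stochastic hill climbing over a dict of numeric parameters.
    best, best_score = dict(params), evaluate(params)
    for _ in range(steps):
        candidate = {k: v + random.gauss(0.0, sigma) for k, v in best.items()}
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

initial = {"forgetting_rate": 0.1, "importance_decay": 0.5, "inference_depth": 3.0}
tuned, score = tune(initial)
print(tuned, score)

The point is only that the tuner treats the system as a black box; it never
has to reflect on *why* a given setting works.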

> Because a correct seed AI design is designed to create and store
> complexity. It has other places to store complexity aside from a global
> surface of 1500 parameters.

It's fine that you like the number 1500, but I don't want you to spread the
meme that any software program I've been involved with has required global
optimization over a 1500-dimensional parameter space. I already corrected
this error in my last response to you.

Of course any AGI design is designed to create and store complexity. To me,
this verbiage says ABSOLUTELY NOTHING about the number of parameters
involved or about the system's sensitivity to those parameters. My
experience and my study of CS and neuroscience tell me that systems
*designed to create and store complexity* tend to have many parameters and a
complex dependence on them.

It's true of the brain, and true of all existing complex software systems.
Ever play with Echo, or Swarm, or Tierra, for example? Or a complex
attractor neural net with asymmetric weights? All these things create and
store complexity, and all have complex multidimensional parameter
dependencies. And in all cases, the more complexity is created and stored,
the trickier the parameter interdependencies.

> Efficient? The two most critical performance issues are:
>
> 1) Making the pieces of the system fit together, at all;
>
> 2) Making tuning of the system tractable and manageable for the pieces of
> the system working together.
>
> Working to make the system more efficient is really not the
> point.

No, making the system more efficient IS an important point, separate from
the points you mention.

Even if the pieces of the system fit together and are "tractably tunable" in
terms of the math of their parameter space, it may STILL be true that the
pieces cannot run efficiently on available hardware. Some of the more
easily tunable aspects of Webmind had this property. Sometimes, not always,
making something more efficient on available hardware leads to nastier
parameter-tuning issues.

> I understand that Novamente is a commercial project

It is a project which explicitly has scientific and humanistic as well as
business goals, however, as clearly stated on www.realai.net and in all
previous writings on Novamente or Webmind.

> and hence may put in a
> lot of human effort toward achieving a given level of performance
> at a given
> time, but I really don't think that it's possible to understand seed AI by
> improving the system yourself and tuning the system using genetic
> algorithms.

This is an odd statement. If neither humans nor optimization methods can
solve the problem, who can? God?

> Mostly, though, I feel that a correct AI design may work *better*
> if you set
> the parameters for "forgetting" things to exactly the right value, but it
> will *still work* even if the parameters are set to different values.

I'm afraid this is a very idealistic point of view.

I hope very much, however, that Novamente turns out to work this nicely!

Occasionally I've worked with narrow-AI systems that had this kind of nice
property, but it's not been the rule, in my experience. Furthermore, NNs
almost NEVER have this kind of nice property. Unfortunately, it's systems
closer to the symbolic level that tend to be like this...

> I think that Novamente's sensitivity to the
> exact equations
> it uses for inference

This is wrong; it is not at all sensitive to the exact equations it uses for
inference. Where did you get this idea?

In fact, it's the NN-ish aspects of the system (the Importance Updating
Function, for example, and evolutionary concept formation) that are trickier
to tune.

The only part of inference on which Webmind seemed to depend sensitively was
*control parameters for inference control*... not inference rules.

> are symptomatic of an AI pathology for
> inference that
> results from insufficient functional complexity for inference. A real AI
> would be able to use rough approximations to Novamente's exact
> equations and
> still work just fine.

Yes. Novamente can do this just fine. If you substitute Pei Wang's NARS
rules for our current inference engine, the system basically works OK; it's
just a bit dumber. NARS is a crude approximation to our probabilistic
inference rules, in my view.

However, if you substitute a slightly different equation for the importance
updating function, THEN you can get really fucked up behavior -- you don't
get "emergent map formation" in any plausible sense.

The closer you are to the symbolic level, the smaller the parameter-tuning
problems are. Not that they're unsolvable on the NN level -- we have an OK
IUF now, and the parameter-tuning problem for the IUF has been largely
solved by some simple adaptive optimizations. (Not tried yet in Novamente.)
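As an illustration only (a hypothetical sketch, not Novamente's actual IUF
or its real parameters), a "simple adaptive optimization" can be as plain as
a feedback loop that nudges a decay parameter so that some observed
statistic stays near a target value:

# Hypothetical sketch: adaptively nudge an importance-decay parameter so
# that the average importance across nodes stays near a target level.
def adapt_decay(decay, observed_mean_importance, target=0.3, gain=0.01,
                lo=0.0, hi=1.0):
    # If importance runs too high, decay faster; if too low, decay slower.
    decay += gain * (observed_mean_importance - target)
    return min(hi, max(lo, decay))

decay = 0.5
for mean_importance in [0.6, 0.55, 0.4, 0.32, 0.29]:  # stand-in observations
    decay = adapt_decay(decay, mean_importance)
print(decay)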

> > THEN, my friend, you will have performed what I would term "One
> Fucking Hell
> > of a Miracle." I don't believe it's possible, although I do consider it
> > possible you can make a decent AI design based on your DGI theory.
>
> Yes, an AI design requires One Fucking Hell of a Miracle. But it's a
> *different* Miracle than the kind you describe. It's not a question of
> solving enormous engineering problems through Incredible Dauntless Efforts
> but of creating designs that are So Impossibly Clever they don't run into
> those enormous engineering problems. I think many of the problems you're
> running into are symptoms of trying to solve the problem too simply.

Well, I look forward to seeing your impossibly clever designs. Nothing in
DGI hints to *me* at anything impossibly clever, but I look forward to
seeing what you have up your sleeve...

> another way of looking at it is that, from my perspective, you're
> making generic algorithms do things that I think should be broken up into
> interdependent internally specialized subsystems.

Examples of these interdependent internally specialized subsystems would be
useful.

> I think one of the reasons you're focused on parameter tuning and
> performance engineering of Novamente is that Novamente is *just barely*
> capable of solving a certain class of engineering problems, because
> Novamente is too simple a design. I think that an improved design would
> just swallow this whole class of problems whole and hence not require an
> enormous amount of parameter tuning and performance engineering to do it.
> Of course, there will then be a new fringe of problems, which you swallow
> not by tuning parameters but by improving the system design so that these
> problems are also "oversolved", swallowed whole. But since you
> believe the
> current Novamente design is already adequate for general intelligence, and
> since the design itself has a flat architecture, that kind of space for
> design improvement is not really open to you. Which is why you focus on
> parameter tuning and performance engineering. That's how I see
> it, anyway.

Frankly, we are NOT focusing primarily on parameter tuning and performance
engineering at this point.

However, we may be in a year or two -- I hope not, though.

My hypothesis was that with a system significantly more complex than
Novamente, these issues would become dominant and incredibly difficult --
not that they are consuming most of our time on Novamente now.

> This business of very fragile solutions is a symptom of gnawing at the
> fringes of the problem space,

The brain is very fragile; minor changes in the levels of certain chemicals
cause all kinds of problems.

And, lifting cognitive problems out of familiar domains like the physical
and social world suddenly makes them INCREDIBLY difficult.

The brain is far less fragile than, say, Deep Blue, but it's hardly a
paragon of generality and flexibility and parameter-insensitivity...

> This kind of design is totally foreign to software engineering as
> it stands
> today, which typically is interested in *just one* solution to a problem.
> If you iterate *just one* solution over and over, it creates systems that
> become very fragile as they become large. If you iterate *many possible
> paths to success* over and over - which really is one of those things that
> you can do in an AI design but not a bank's transaction system - then you
> don't get the AI pathology of this incredible fragility.

I don't understand the connection you are drawing between software
engineering and AI design.

In my approach, the two are pretty separate. The AI design is mathematical,
inspired by philosophy. How to engineer a given mathematical design is a
separate problem.

At a very high level the Novamente design is created so as to support
practical software engineering, but the details of the AI design are in no
way determined by issues of software engineering.

> > But this is the worst example you could have possibly come up
> with! Cyc is
> > very easy to engineer precisely because it makes so many simplifying
> > assumptions.
>
> This is just how I feel about Novamente.

Novamente is NOT very easy to engineer, not at all.

And, it makes a couple orders of magnitude fewer simplifying assumptions
than CYC.

Lenat understands this, I'm not sure why you don't.

You think Novamente STILL makes too many simplifying assumptions? Fine.

I think DGI makes some really weird simplifying assumptions, like
simplifying away most of the dynamics of concepts...

I guess we each choose our assumptions.

> > not overly complex ones. AI scientists have
> > VERY often, it seems to me, simplified their theories so they would have
> > theories that could be implemented without excessive
> implementation effort
> > and excessive parameter tuning.
>
> Noooo... AI scientists have often oversimplified their theories
> because (a)
> they made philosophical connections between observed human behaviors and
> simple computational properties based on surface similarities and
> enthusiasm; (b) because they didn't have the knowledge, skill, or
> pessimistic attitude to perceive really complex systems, and
> hence could not
> "move" in the direction of greater complexity when figuring out
> which system
> to design.

Well, I think many many conventional AI people have the knowledge and skill
to "perceive really complex systems" and they have plenty of pessimism!

I think they want to build systems that work and do stuff, quickly, so they
can publish papers and get tenure and promotion. This pushes them to do
simpler stuff than Novamente or CYC or DGI. They don't have patrons like
you do (and nor do I); they won't get tenure for publishing "Staring into
the Singularity"; they need to demonstrate results continually to keep
paying the rent.

I think that very, very, very few traditional AI people have made the
cognitive errors you ascribe to them, in recent years, although they have
made many other errors.

The errors you ascribe to them were made by the AI community in the 60's and
70's, on the basis of much less knowledge about AI than is now available.
It's easy now to laugh at the mistakes of AI researchers from a different
era, given all that we know now.

I accuse my academic and industry AI colleagues of a lack of ambition, but
not of the profound shallow-mindedness that you insinuate. I think most AI
researchers have learned the lesson that AGI is really hard, and as a result
are now working on simpler stuff.

And occasionally they make the error of overgeneralizing from their simpler
stuff to bigger issues...

> This is an inescapable problem of seed AI, and one of the ways it becomes
> more tractable is by, for example, localizing parameters.

Of course, one localizes parameters as much as one can. The modular
structure of Novamente helps with this, but there are limits...
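To illustrate what I mean by localizing parameters (again a hypothetical
sketch in Python, not Novamente's actual structure): each module owns its
own small parameter set, so tuning becomes a collection of small local
problems rather than one global search.

from dataclasses import dataclass

# Hypothetical module-local parameter sets; the names are illustrative only,
# not Novamente's actual parameters.
@dataclass
class ForgettingParams:
    decay_rate: float = 0.1
    min_importance: float = 0.01

@dataclass
class InferenceControlParams:
    max_depth: int = 3
    confidence_cutoff: float = 0.2

class ForgettingModule:
    def __init__(self, params: ForgettingParams):
        self.params = params  # tuned against this module's own performance measure

class InferenceController:
    def __init__(self, params: InferenceControlParams):
        self.params = params

# Each module carries its own few parameters, so tuning is a set of small
# local searches rather than one search over a huge flat global vector.
forgetting = ForgettingModule(ForgettingParams())
inference = InferenceController(InferenceControlParams())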

> I think that the problems you are now experiencing are AI pathologies
> of parameters that are too global and too simple.

At this point, you have called my work "pathological" so many times that I
am tempted to accuse you of having an "AI psychopathology" which causes you
to believe your design will be the Holy Grail, when you don't even know what
your design is yet!

This conversation reminds me of conversations I had with Webmind Inc.
cofounder Onar Aam. Everything was really simple to him, and he understood
how to make everything work perfectly, until he actually tried to do
something. "It compiled on the Onar machine," was the running joke.

> > Are they the structures described in the DGI philosophy paper that you
> > posted to this list, or something quite different?
>
> Memory. Concept kernel formation. I would say the things from DGI, but I
> would add the proviso that I don't think you understood which
> subsystems DGI
> was asking for.

If I did not understand what subsystems DGI was asking for, can you clarify
in some more comprehensible way?

I read the paper very carefully.

As a side issue, I do not exactly agree with your theory of concepts as
having "kernels". I think some concepts have kernels, but these emerge;
concepts aren't necessarily built around kernels...

> > I sure am eager to see how DGI or *any* AGI system is going to
> avoid this
> > sort of problem.
>
> Deep architectures, experiential learning of local patterned variables
> instead of optimization of global quantitative variables,
> multiple solution
> pathways on multiple levels of organization, carving the system at the
> correct joints.

These words don't help me believe, of course...

Experiential learning of local patterned variables is VERY HARD and requires
a lot of experiential data.

This learning has to be accomplished by some cognitive mechanism... which
may have quantitative parameters ;)

I think that quantitative parameters are easier to tune than patterned
variables, and that both kinds of variables are important in the mind.

> > "Deep architecture" is a cosmic-sounding term; would you care
> to venture a
> > definition? I don't really know what you mean, except that
> you're implying
> > that your ideas are deep and mine are shallow.
>
> Hopefully, what I said above fleshes out the definition a bit.

It didn't really. Don't you have a few-sentences definition of "deep
architecture"?

> From my perspective, you're trying to use simple generic
> processes to do things that require the interaction of interdependent
> internally specialized processes.

Such as what processes?

> The part where we disagree is in the question of whether
> evolution carefully
> and exactingly sculpted those higher levels of organization just as it
> sculpted the neural interactions, or whether all higher levels of
> organization emerge automatically as the laws of physics supposedly do (I
> have my doubts).

You are caricaturing my point of view, in spite of my repeated attempts at
clarification.

> I also feel that if you intuit dynamics will emerge, they will
> not emerge.
> If you know what the dynamics are and how they work, you will be able to
> create systems that support them; not otherwise.

Well, I have intuitive knowledge of what the dynamics are and how they work,
which is what leads me to the intuition that these dynamics will emerge.
Writing down this intuitive knowledge would be a big project in itself!

> I think that the history
> of AI shows that one of the most frequent classes of error is
> hoping that a
> quality emerges when you don't really know exactly how it works.

Not really; very, very few classic AI designs have relied significantly on
the concept of "emergence".

> > You seem to have misinterpreted me. I am not talking about
> anything being
> > in principle beyond human capability to comprehend forever.
> Some things ARE
> > (this is guaranteed by the finite brain size of the human species), but
> > that's not the point I'm making.
>
> OK. We have different ideas about what a modern-day AI
> researcher should be
> trying to comprehend. Does that terminology meet with your approval?

Well, I don't know if we have different ideas about what an AI researcher
should be trying to comprehend.

I think we have different intuitive understandings of the same issues,
largely.

> > Eliezer, I think it is rather funny for *you* to accuse *me* of
> flinching
> > away from the prospect of trying to do something!
>
> What on Earth are you talking about here? Where did you get the
> idea that I
> am deliberately holding back on anything? I'd be putting together a
> programming team right now if SIAI had the funding.

What would you have the programming team implement?

-- Ben


