Re: Seed AI milestones (was: Microsoft aflare)

From: Samantha Atkins (samantha@objectent.com)
Date: Wed Feb 27 2002 - 13:57:14 MST


Eugene Leitl wrote:

>
> There's a barrier to the complexity of a system you can build as a single
> person. Different persons have different ceilings; mine is quite low.
> Teams do not really scale in that regard. The ceiling of a group is not
> dramatically higher than that of a single individual, and the ceiling of
> a large group can actually be lower. This is basic software engineering
> knowledge.

Fortunately, there are a few design techniques that allow
individuals and groups to do better than this. In particular, if
a complex problem can be reasonably broken into subsystems, each
of which is below the complexity barrier, and if the interactions
of the subsystems are also below the complexity barrier, then an
aggregate system of greater complexity can be achieved. Both
steps do seem to depend heavily on who does the division into
subsystems and how well that person or those persons see the
interactions of the subsystems. In practice, really good systems
require a really good system architect and perhaps some good
subsystem architects. It also helps if there are some reuse and
simplification people/agents involved who notice recurring
patterns and connections to other projects that may simplify or
improve the project, or suggest future "infrastructure" projects
giving more leverage to the person or group.
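
To make that concrete, here is a minimal Python sketch of the kind
of decomposition I mean. The subsystem names and the dict-based
contract are purely illustrative, not any particular project's
design:

from abc import ABC, abstractmethod

class Subsystem(ABC):
    """A unit small enough to fit below one person's complexity ceiling."""

    @abstractmethod
    def process(self, item):
        """The entire contract between this subsystem and the rest."""

class Parser(Subsystem):
    def process(self, item):
        return dict(item, parsed=True)

class Planner(Subsystem):
    def process(self, item):
        return dict(item, plan=["step-1", "step-2"])

def run_pipeline(stages, item):
    # All subsystem interaction is confined to this one loop, keeping
    # the interactions below the complexity barrier as well.
    for stage in stages:
        item = stage.process(item)
    return item

print(run_pipeline([Parser(), Planner()], {"input": "raw data"}))

The point is that Parser and Planner can each be understood in
isolation, and the only interaction anyone has to reason about is
the one loop in run_pipeline.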

What is difficult is to grow complexity without increasingly
burdensome manual procedures (meeting and documentation blight).
Some high-order automation could be very helpful here. A few
things have been tried to date, with fairly limited, although
helpful, results.

>
> General intelligence is not a property of a simple system. Far from it.
> As a result I predict that human software engineers coding an AI
> explicitly (i.e., not using stochastic/noisy/evolutionary methods) are
> going to fall short of the goal.
>

So we do a seed.

 
>
>>I agree that goal-directed self-modification is a specialized mental
>>function, similar (very roughly speaking) to, say, vision processing, or
>>mathematical reasoning, or social interaction. However, also like these
>>other things, it will be achieved by a combination of general
>>intelligence processes with more specialized heuristics.
>>
>
> Am I correct to assume that we're talking about explicit codification of
> knowledge distilled from human experts? Is there any reason to suspect
> that we're going to do any better than Lenat & Co.? The track record so
> far is not overwhelming.
>

Well, there is also the problem that goal-directed modification
of software is not very well developed to date, even among human
experts. We can optimize some small-scale things rather well.
We have some rules of thumb, some ways of improving rather
formal software specifications, and some tests for certain types
of goodness, but not a great deal else. A really good human
software designer tweaks a system not just by formal rules but
also by much softer, even artistic, considerations. And the
limits of such improvements, as well as their cost, are fairly
well appreciated. Actually, though, most software can be
improved by an order of magnitude (across a range of desirable
traits, not just performance) without much specialized knowledge
at all. It usually isn't thus improved because the original
software is a known quantity in terms of maintenance, bug fixes,
and performance, and because of unwillingness to take a chance
(however small) on breakage, or, more to the point, on proving
that breakage did not occur.
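
A toy sketch of why that last step dominates the cost (the
functions and the random-testing harness here are invented for
illustration):

import random

def original(xs):
    # The known quantity: slow, but trusted after years of maintenance.
    total = 0
    for x in xs:
        total += x * x
    return total

def candidate(xs):
    # The proposed improvement.
    return sum(x * x for x in xs)

def safe_to_swap(old, new, trials=1000):
    # Random differential testing: cheap evidence of equivalence,
    # but even 1000 passing trials is evidence, not a proof.
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        if old(xs) != new(xs):
            return False  # breakage found; keep the known quantity
    return True

print("safe to swap:", safe_to_swap(original, candidate))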

>
>>Our first self-modification experiments in Novamente (our new AI system,
>>the Webmind successor) will not involve Novamente rewriting its C++
>>source, but rather Novamente rewriting what we call "schema" that
>>control its cognitive functioning (which are equivalent to programs in
>>our own high-level language, that we call Sasha (named after our
>>departed collaborator Sasha Chislenko)).
>>
>
> You will let us know, how well self modification will do, will you? This
> is a genuinely interesting experiment.
>
>
>>In the first-draft Novamente schema module, executing a schema will be
>>about 2 orders of magnitude slower than executing an analogous C++
>>program, but this is because the first-draft schema module will not
>>embody sophisticated schema optimization procedures. We have a fairly
>>detailed design for a second-version schema module that we believe will
>>narrow the performance gap to within 1 order of magnitude.
>>
>>Why accept a 1 order of magnitude slowdown? Because we are
>>*confronting* the complexity barrier you mention rather than hiding in
>>fear from it. Novamente is very complex, both in its design and in its
>>
>
> Bootstrap requires *more* resources, not less of it. Nonchalance about
> losing touch with bare metal in bootstrap design phase sounds very wrong
> to me.
>

But the final optimization to bare metal, or close to it, is very
different from higher-language and conceptual optimizations. It
is also a more general job than any particular project type such
as AI. It is important to divide the effort properly. Humans,
and possibly some layers of a self-modifying AI, need to work
with clean, powerful conceptual tools and procedures that give
wings to thought, if you will, and ease understanding and
maintenance. At a very different level sits optimization that is
highly automated and can arguably only be done in bulk by
computer: automatic optimization closer to the metal while
preserving important (how to mark these?) invariants. Both the
large-scale, inefficient but expressive work and the eventual
encoding into highly efficient executables are required. But
they should not be confused or conflated. Doing so has killed
many a project.
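
One hedged answer to the parenthetical "how to mark these?":
attach invariants to the expressive, human-maintained code as
explicit, machine-checkable predicates that any automated
lower-level rewrite must preserve. The decorator below is a
hypothetical sketch, not an existing tool:

import functools

def invariant(check, description):
    """Mark a property that any optimized replacement must preserve."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            result = fn(*args, **kwargs)
            assert check(result, *args, **kwargs), "violated: " + description
            return result
        wrapped.invariants = getattr(fn, "invariants", []) + [description]
        return wrapped
    return decorate

@invariant(lambda result, xs: sorted(result) == sorted(xs),
           "output is a permutation of the input")
@invariant(lambda result, xs: all(a <= b for a, b in zip(result, result[1:])),
           "output is in ascending order")
def my_sort(xs):
    # The clean, expressive version; a machine-generated replacement
    # must keep every marked invariant above.
    return sorted(xs)

print(my_sort([3, 1, 2]))   # [1, 2, 3]
print(my_sort.invariants)   # both marked properties are machine-visible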

 
>
>>emergent behaviors, but we are working to keep it manageably complex.
>>In our judgment, having the system modify schema (Sasha programs) rather
>>than C++ source is a big help in keeping the complexity manageable.
>>And this added manageability is more than worth an order of magnitude
>>slowdown.
>>

YES. The slowdown can be factored out, at least in part, at a
later stage of the process.
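
One way that factoring-out could look, sketched in Python with an
invented toy schema format (this is not Novamente's Sasha
language): interpret schemas while the design is in flux, then
specialize the hot ones into straight-line code later.

HOT_THRESHOLD = 100

def interpret(ops, x):
    # Slow path: walk the schema structure on every call.
    for op, arg in ops:
        x = x + arg if op == "add" else x * arg
    return x

def compile_schema(ops):
    # Later stage: specialize the schema into straight-line Python,
    # eliminating the per-step dispatch of the interpreter.
    body = "".join("    x = x %s %r\n" % ("+" if op == "add" else "*", arg)
                   for op, arg in ops)
    namespace = {}
    exec("def fast(x):\n" + body + "    return x\n", namespace)
    return namespace["fast"]

class Schema:
    def __init__(self, ops):
        self.ops, self.calls, self.fast = ops, 0, None

    def __call__(self, x):
        self.calls += 1
        if self.fast is None and self.calls > HOT_THRESHOLD:
            self.fast = compile_schema(self.ops)  # pay compile cost once
        return self.fast(x) if self.fast else interpret(self.ops, x)

double_plus_one = Schema([("mul", 2), ("add", 1)])
print(double_plus_one(10))  # interpreted now, compiled once it runs hot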

- samantha


