RE: Seed AI milestones (was: Microsoft aflare)

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 27 2002 - 08:51:03 MST


hi eugene etc.,

> I'm of course disagreeing that general intelligence is codable by a team
> of human programmers due to the usual complexity barrier,

I have never understood your "complexity barrier" argument as anything but
an intuition.

You're quite welcome to your pessimistic intuition on this point, of course.

> but for the sake
> of argument: let's assume it's feasible. It seems that introspection and
> self-manipulation require a very specific knowledge base -- compare, for
> instance, a human hacking the human brain morphology box, or fiddling
> with the genome for the ion channels expressed in the CNS. This is
> nothing like navigating during a pub crawl, or building a robot. Very
> different knowledge base.

I agree that goal-directed self-modification is a specialized mental
function, similar (very roughly speaking) to, say, vision processing, or
mathematical reasoning, or social interaction. However, also like these
other things, it will be achieved by a combination of general intelligence
processes with more specialized heuristics.

> > It may also use various functions and applications of high-level
> > intelligence as low level glue, which is an application closed to
> > humans, but that doesn't necessarily imply robust modification of the
> > low-level code base; it need only imply robust modification of any of
> > the cognitive structures that would ordinarily be modified by a
> > brainware system.
>
> So you're modifying internal code, and dumping that into a low-level
> compilable result? You could cloak brittleness that way (assuming you
> design for it), but you'd lose efficiency, and hence utilize the hardware
> (which is not much to start with, even a decade from now) very badly,
> losing orders of magnitude of performance that way.

I think you are wrong about losing "orders of magnitude" of performance.
If you have any detailed calculations to back up this estimate, please
share them.

My own experience, based on prototyping in this space for a while, is that
you will lose about an order of magnitude of performance by doing
self-modification in a *properly optimized* high-level language rather than
in a low-level language like C++.

Our first self-modification experiments in Novamente (our new AI system,
the Webmind successor) will not involve Novamente rewriting its C++ source,
but rather Novamente rewriting what we call "schema" that control its
cognitive functioning. These schema are equivalent to programs in our own
high-level language, which we call Sasha (named after our departed
collaborator Sasha Chislenko).
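
To make the distinction concrete, here is a minimal sketch in Python
(illustrative only: the tuple representation and the fold_constants
rewrite are hypothetical stand-ins, not Novamente's actual schema format
or Sasha's syntax). The point is that the procedures the system runs are
plain data it can rewrite at runtime, while the interpreter underneath
(the analog of the fixed C++ layer) never changes:

# A schema is a nested tuple: (operator, arg1, arg2, ...);
# strings are variable references, numbers are literals.
schema = ("add", ("mul", 2, 3), "x")      # computes 2*3 + x

def run(s, env):
    """Interpret a schema tree against an environment of bindings."""
    if isinstance(s, str):
        return env[s]
    if isinstance(s, (int, float)):
        return s
    op, *args = s
    vals = [run(a, env) for a in args]
    return {"add": lambda a, b: a + b,
            "mul": lambda a, b: a * b}[op](*vals)

def fold_constants(s):
    """A toy self-modification step: the system rewrites its own
    schema, folding constant subexpressions, then runs the new
    version through the same unchanged interpreter."""
    if not isinstance(s, tuple):
        return s
    op, *args = s
    args = [fold_constants(a) for a in args]
    if all(isinstance(a, (int, float)) for a in args):
        return run((op, *args), {})
    return (op, *args)

print(run(schema, {"x": 4}))              # 10
schema = fold_constants(schema)           # now ("add", 6, "x")
print(run(schema, {"x": 4}))              # still 10, fewer steps

Nothing in the compiled layer had to be touched to make that rewrite,
which is exactly the property we want schema modification to have.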

In the first-draft Novamente schema module, executing a schema will run
about two orders of magnitude slower than executing an analogous C++
program, but this is because the first-draft schema module will not embody
sophisticated schema optimization procedures. We have a fairly detailed
design for a second-version schema module that we believe will narrow the
performance gap to within one order of magnitude.
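
To give a feel for where that gap comes from, here is another toy Python
sketch (again hypothetical, not Novamente code; the compile_schema
optimizer and whatever ratios you measure on your machine are purely
illustrative, and the real numbers will differ). A naive tree-walking
interpreter pays dispatch overhead at every node on every execution,
while compiling the same schema once into a native closure removes most
of that overhead:

import timeit

schema = ("add", ("mul", "x", "x"), 1)    # computes x*x + 1

def run(s, env):
    """First-draft style: naive tree-walking interpreter."""
    if isinstance(s, str):
        return env[s]
    if isinstance(s, (int, float)):
        return s
    op, left, right = s
    a, b = run(left, env), run(right, env)
    return a + b if op == "add" else a * b

def compile_schema(s):
    """Second-version style: translate the tree once into a closure,
    paying the dispatch cost at compile time instead of per call."""
    if isinstance(s, str):
        return lambda env: env[s]
    if isinstance(s, (int, float)):
        return lambda env: s
    op = s[0]
    l, r = compile_schema(s[1]), compile_schema(s[2])
    if op == "add":
        return lambda env: l(env) + r(env)
    return lambda env: l(env) * r(env)

compiled = compile_schema(schema)
native = lambda x: x * x + 1              # the hand-coded C++ analog

env = {"x": 7}
for name, f in [("interpreted", lambda: run(schema, env)),
                ("compiled   ", lambda: compiled(env)),
                ("native     ", lambda: native(7))]:
    print(name, timeit.timeit(f, number=100_000))

The real second-version module will of course do much more than this,
but the shape of the trade-off is the same: optimization work done once
per schema buys back most of the per-execution overhead.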

Why accept a one-order-of-magnitude slowdown? Because we are *confronting*
the complexity barrier you mention rather than hiding in fear from it.
Novamente is very complex, both in its design and in its emergent behaviors,
but we are working to keep it manageably complex. In our judgment, having
the system modify schema (Sasha programs) rather than C++ source is a big
help in keeping the complexity manageable. And this added manageability is
more than worth an order of magnitude slowdown.

Eli and I are fashioning solutions (or trying!!) whereas you are pointing
out potential problems. There is nothing wrong with pointing out problems;
however, it is a fact that both the nature of the problems and the
potential workarounds that exist become far clearer once one starts
working at solving the problems, rather than just talking about them.

-- Ben G


