RE: Designing AGI

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Oct 26 2005 - 06:17:38 MDT


> > But, one can still let the detailed representations evolve and adapt
> > (and emerge ;) within the "environmental constraints" one has explicitly
> > wired in.
>
> Done correctly, that would be the actual valid use of 'emergence' in
> AGI. But you'd never want to use something this fuzzy and subjective
> as a design goal or basic principle. You don't set out saying 'we're
> going to build an AGI using emergence!'; you design an AGI using the
> best mechanisms you can find to achieve specific effects, such that
> they work together to produce all the key aspects of intelligence,
> and then when you finally get to summarising all that complexity for
> the press release then you /might/ be justified in saying 'ah, this
> actually works by letting Xs emerge in the context of Y...'.

Well, "emergence" is a very general term which always conceals all the
particularities of the situation.

For instance, in immunology one can say that the overall immune
response is an emergent phenomenon, because:

-- it arises from the "immune network" of interactions between
antibody classes via "self-organization," rather than via the
imposition of some sort of top-down structure

-- it is not obvious from the properties of the individual antibody
classes taken in isolation

However, this doesn't mean that one could design an effective immune
system merely by taking a random bunch of antibody-like thingies and
throwing them together and hoping the right networks will come out
of it. In fact, some Los Alamos and SFI folks (Alan Perelson and
Rob de Boer, for instance) have played with this approach and have
managed to replicate some of the overall properties of real immune
networks, but not others. A lot of the properties of real immune
networks seem to come down to T-cells, which are not understood all
that well and tend not to be adequately accounted for in the LANL/SFI
computer simulations (last I checked).
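
Just to make that concrete, here is a toy sketch (in Python) of the
general kind of thing those network simulations do. To be clear, this
is NOT the Perelson/de Boer models -- the matching function, the
response curve and all the parameter values here are made up purely
for illustration. Each antibody clone is stimulated by clones whose
"shape" is complementary to its own, and the network-level response
is just whatever pattern of clone concentrations falls out of
iterating the local update rule:

import random

N = 40   # number of antibody/B-cell clones (arbitrary)
L = 8    # length of each clone's bit-string "shape" (arbitrary)

# Each clone gets a random bit-string shape; two clones interact in
# proportion to how complementary their shapes are (a crude stand-in
# for idiotype/anti-idiotype matching).
shapes = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]

def affinity(a, b):
    # fraction of positions where the two shapes are complementary
    return sum(x != y for x, y in zip(a, b)) / L

J = [[affinity(shapes[i], shapes[j]) for j in range(N)] for i in range(N)]

x = [1.0] * N   # clone "concentrations"

def step(x, dt=0.05, source=0.1, decay=0.5):
    # one Euler step: each clone is stimulated by the field it feels
    # from the rest of the network (with a bell-shaped response, so
    # both very low and very high stimulation give little growth),
    # and decays at a constant rate
    new = []
    for i in range(N):
        h = sum(J[i][j] * x[j] for j in range(N))
        response = h / (1.0 + h) * (1.0 / (1.0 + 0.1 * h))
        new.append(x[i] + dt * (source + x[i] * response - decay * x[i]))
    return new

for t in range(2000):
    x = step(x)

# The "emergent" object is the pattern of which clones end up large
# and which end up suppressed -- nothing in step() mentions that
# pattern explicitly.
print(sorted(round(v, 2) for v in x))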

So, yeah, there is always a detailed story underlying any observed
instance of "emergence", and if one wants to engineer a system with
certain emergent behaviors, one needs to understand specifically
WHY the particular type of emergent behavior one wants is likely
to emerge from the particular system one is building. One doesn't
need to understand in detail how each particular instance of
emergent behavior will be produced, but probably one has to
understand things well enough to explain in detail (if
laboriously) how SOME particular examples of the emergent
behavior will be produced.

For instance, in the immune network example, I would expect that
in the case of an adequate immune network simulation, there would
be a conceptual and mathematical explanation of why a particular
choice of computational T-cell/B-cell model is expected to give
rise to particular aspects of real immune response.

Getting back to AI, in Novamente one aspect of knowledge
representation involves what we call "maps" -- fuzzy sets of
nodes/links that tend to be activated together. These maps
are not programmed explicitly -- they emerge. But the capability
for such maps to emerge is not there by accident, it's there
by design. And I could explain to you in detail the processes
by which an example map (say, the map for the concept "block",
meaning blocks observed in the AGI-SIM simulation world) would
arise. But doing such a detailed explanation for any one concept
is a LOT of work, so it wouldn't be feasible to do that for every
map in the system.

What gives map formation the flavor of "emergence" is that there
is no MapFormation MindAgent (to lapse into Novamente lingo), i.e.
no block of code that explicitly produces maps. Rather, map
formation occurs as an indirect consequence of the "localized"
activity of a bunch of processes that act on individual nodes and
links, and that then coordinate to cause maps to "self-organize."
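
To make that concrete, here's a little toy sketch (again Python, and
again just an illustration I'm making up for this email -- it is not
Novamente code, and names like hebbian_step and find_maps don't
correspond to anything in the real system). The only operations are
local, per-link Hebbian updates driven by co-activation; the "maps"
are just the clusters of strongly inter-linked nodes an observer can
read off afterwards:

import itertools, random

NODES = list(range(30))
weights = {}        # link weights, initially absent/weak

# Pretend the world presents a few recurring "situations", each of
# which activates an overlapping subset of nodes (e.g. percepts that
# tend to co-occur when a block is seen in the simulation world).
situations = [set(random.sample(NODES, 8)) for _ in range(4)]

def hebbian_step():
    # Local process: pick a situation, activate its nodes, and nudge
    # up the weight of each link between co-active nodes. No code here
    # ever mentions "maps".
    active = random.choice(situations)
    for i, j in itertools.combinations(sorted(active), 2):
        weights[(i, j)] = weights.get((i, j), 0.0) + 0.1
    # mild decay on all links, so spurious co-activations wash out
    for k in weights:
        weights[k] *= 0.99

for _ in range(5000):
    hebbian_step()

def find_maps(threshold=1.0):
    # Post-hoc observer: a "map" is a connected cluster of nodes
    # joined by strong links. This function only describes what
    # emerged; nothing in the dynamics calls it.
    strong = {k for k, w in weights.items() if w > threshold}
    clusters, seen = [], set()
    for n in NODES:
        if n in seen:
            continue
        stack, cluster = [n], set()
        while stack:
            m = stack.pop()
            if m in cluster:
                continue
            cluster.add(m)
            for (a, b) in strong:
                if a == m and b not in cluster:
                    stack.append(b)
                elif b == m and a not in cluster:
                    stack.append(a)
        if len(cluster) > 1:
            clusters.append(cluster)
            seen |= cluster
    return clusters

print(find_maps())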

> > I didn't say "complex systems theory" btw I said "complex systems
> > thinking." This is because there isn't really any general "complex
> > systems theory" out there.... there are theories of particular
> > classes of complex systems, and then there is mathematical theory
> > relevant to complex systems (e.g. dynamical systems theory) and
> > then there is complex systems *philosophy* (which I do find useful
> > for guiding my thinking in many cases)
>
> Ok, important distinction, though at this point I'd say that
> understanding 'complex systems philosophy' is primarily useful in
> terms of recognising and avoiding unwanted dynamics and failure
> modes in AGI/FAI.

The concepts of "attractor" and "attractor basin" have been useful
in Novamente design. Maps, as alluded to above, are roughly speaking
"lobes" in strange attractors of Novamente dynamics.

-- ben


