Cyc + Albus + Novamente = cheap and simple murder of third world citizens [was RE: Questions about any Would-Be AGI]

From: Ben Goertzel
Date: Tue May 21 2002 - 17:29:48 MDT

> Yes, Cyc operates from the assumption that meaning can be expressed
> as relationships among concepts, insofar as passing tests of understanding
> -- for example question answering.

I think this is true *in principle* -- but that as a matter of practice, it
is much more efficient to represent meaning in terms of relations among
*concepts plus remembered percepts and actions*.

The key difference between concepts and percepts/actions, as I intend it in
this context, is that percepts and actions are only VERY AWKWARDLY AND
INEFFICIENTLY expressible in terms of logical propositions, and are much
more naturally expressed in terms like huge arrays of numbers (percepts) or
complex executable programs (actions).
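To make the contrast concrete, here is a minimal Python sketch (the array shape, the "reach" action, and all numbers are invented for illustration): a percept sits naturally in a numeric array, while an action sits naturally in an executable procedure.

```python
import random

# Percept: most naturally a dense numeric array -- here, a made-up
# 8x8 "camera patch" of brightness values.
percept = [[random.random() for _ in range(8)] for _ in range(8)]

# Action: most naturally an executable program -- here, a made-up
# closure that moves a 1-D effector a fraction of the way toward a
# target position on each call.
def make_reach_action(target, step=0.1):
    def act(position):
        return position + step * (target - position)
    return act

reach = make_reach_action(target=1.0)
pos = 0.0
for _ in range(3):
    pos = reach(pos)

# Neither representation is a set of logical propositions, though
# propositions could be (awkwardly) extracted from either.
```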

> I suppose that given a sufficient model (or lots of instances) of what you
> call perceptual patterns, I would try to determine a vocabulary to assert
> relationships among the objects in the perception. Fuzzy terms could be
> used to collapse the (overly precise) features from visual
> perceptions. So if the system were shown enough pictures of Apples,
> then the assertions might be of the form: "Apples are approximately red in
> color" "Apples are about 5 cm in diameter". Given a KB containing such
> assertions, and a vision system extracting those features, I believe that
> Cyc could classify a new picture as an Apple picture or not.

yes, but I would assert that if you have a set of remembered perceptions
and actions of size N, the space cost of representing this set thoroughly as
a collection of conceptual propositions is going to be superexponential in N.

By keeping an array of numerical data as a record of a perceived scene, one
is keeping a "compressed version" of the huge number of propositions that
are patterns in this data.

Of course, one can trivially represent a table of perceived data in terms of
propositions like

        "Element at row i, column j of table T is 555.4444"

but this isn't really conceptual in the ordinary sense, it's just a mapping
of percepts into conceptual language.

In Novamente we keep percepts and actions in two forms.

Percepts are kept in a special repository of number tables, and actions are
kept in a special format called "compound schema."

But these number tables can then be extracted into propositional form, and
these compound schema can be expanded into distributed schema.

I feel this kind of dual representation is necessary for pragmatic purposes.
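A hedged sketch of the dual-representation idea (the class and method names are illustrative, not Novamente's actual API): percepts live in compact numeric tables, and propositional form is extracted only on demand when reasoning needs it.

```python
# Percepts stay compact as number tables; propositions are derived
# lazily rather than stored. (Names here are invented for the sketch.)
class PerceptStore:
    def __init__(self):
        self.tables = {}

    def add(self, name, table):
        # compact numeric form: the "compressed version" of the
        # huge set of propositions implicit in the data
        self.tables[name] = table

    def extract_propositions(self, name):
        # expand into propositional form only when needed
        return [(name, i, j, v)
                for i, row in enumerate(self.tables[name])
                for j, v in enumerate(row)]

store = PerceptStore()
store.add("scene1", [[0.2, 0.8], [0.5, 0.1]])
props = store.extract_propositions("scene1")
```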

> As far as question answering is concerned, I would say there are many
> cases in which Cyc's understanding of the concepts denoted by English
> words is testable. For example "World War II" denotes the term
> c0fd5d2b-9c29-11b1-9dad-c379636f7270 which has the name WorldWarII. Cyc
> knows (has the assertion): The Nuremberg Trials starts after the end
> of World War II. In CycL: (startsAfterEndingOf NurembergTrials WorldWarII)
> Cyc could be asked (in the CycL equivalent to) "Did the Nuremberg Trials
> occur before World War II?" and respond "No" and give the justification.

Drawing shallow deductive conclusions from book-learning, in this manner, is
definitely going to be relatively easy for Cyc. But drawing
useful/meaningful analogies between significantly different situations that
it "knows" about is going to be hard for it, because analogy is driven by
the creation of new concepts and by abstract contextual control patterns,
both of which are derived from analysis of large masses of perceptual and
action data that are not conveniently representable in propositional form.
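The quoted Nuremberg example is exactly this sort of shallow deduction. A toy Python sketch (CycL-flavored tuples, not actual CycL and not Cyc's inference engine) of answering "did A occur before B?" from a stored temporal assertion, with a justification:

```python
# Toy KB: one CycL-style assertion, represented as a tuple.
kb = {("startsAfterEndingOf", "NurembergTrials", "WorldWarII")}

def occurs_before(a, b):
    # If a starts after b ends, then a cannot occur before b.
    if ("startsAfterEndingOf", a, b) in kb:
        return False, f"(startsAfterEndingOf {a} {b})"
    return None, None  # unknown to this toy KB

answer, justification = occurs_before("NurembergTrials", "WorldWarII")
```

The lookup-plus-one-rule structure is what makes this "shallow": nothing here requires new concepts, only traversal of book-learned assertions.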

> > If I had to pick an incremental direction for Cyc (as opposed
> to a complete philosophical
> > revision), I think it would be focusing on those Cyc concepts
> that can be
> > grounded in the complex data of Cyc's internals - i.e., teaching Cyc
> > perceptual concepts for its own internals.
> Agreed, as that is a path to Seed AI.

I disagree very strongly with this intuition.

I think it would be a lot better for the Cyc project to focus on some
particular perception-action domain that is richer and simpler than Cyc's
internals.

For instance, the military does vast amounts of work on computer vision and
associated robotic action. If I were directing the Cyc project, I would try
to use Cyc in the highest level of a hierarchical-architecture vision
processing / robotic control system. This would be a very nice way to get
Cyc started grounding some of its concepts in reality. For instance, its
concept of "tree" would be grounded in the visual patterns seen by a camera
eye when it looks at a tree. Its concept of "fast" would be grounded in
what happened when its robot body was moving fast.
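A hedged sketch of what grounding a symbol like "tree" could mean at the bottom of such a hierarchy (the feature vectors, prototypes, and threshold are all invented): a concept is tied to a stored prototype of camera features rather than only to other symbols.

```python
# Nearest-prototype classification over made-up camera feature
# vectors. A concept is "grounded" here in the sense that its meaning
# is a region of percept space, not a web of other symbols.
def grounded_classify(features, prototypes, threshold=0.5):
    best, best_dist = None, float("inf")
    for concept, proto in prototypes.items():
        dist = sum((f - p) ** 2 for f, p in zip(features, proto)) ** 0.5
        if dist < best_dist:
            best, best_dist = concept, dist
    # refuse to label percepts far from every known prototype
    return best if best_dist <= threshold else None

prototypes = {"tree": [0.2, 0.9, 0.7], "road": [0.8, 0.1, 0.3]}
label = grounded_classify([0.25, 0.85, 0.7], prototypes)
```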

For instance, interfacing Cyc with James R. Albus's (NIST) work using
intelligent hierarchically/heterarchically-structured control systems to
guide experimental tanks would be extremely interesting. Look up Albus's
work if you're not familiar with it; it is truly awesome. If you don't know
him and are potentially interested in this connection, let me know; I know
Albus, though we're not extremely close.

Personally I don't like the idea of military funding for AI, but since
you're already living off military funding, collaborating with someone like
Albus doing sensory processing for the military couldn't hurt.

I mention Albus because I think he has a VERY VERY GOOD architecture for the
perception/action levels of the mind, applied practically in an extremely
effective way. (The video of his automatically-controlled tank cruising
around is pretty nifty; I'm not sure if it's online or not. I saw it at a
conference recently.)

Of course, Cyc and Albus's architecture don't glue together all that
naturally. To connect them directly would require some real hacking.
Novamente with its more flexible architecture of course could provide a kind
of "glue" here. Using Albus's stuff, Cyc, and Novamente, we could make a
really awesome intelligent tank that would be incomparably efficient at
mowing down third world citizens without endangering precious US lives. Now
there's a truly inspiring thought, huh?? ;-p

My intuition is that Cyc's reasoning engine will prove badly inadequate for
dealing with concepts that are grounded in perceptual and motor data such as
that which comes through a tank.... One will need a much more robust
reasoning system like... umm, Novamente's ;->

-- Ben G

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT