Re: Questions about any Would-Be AGI

From: Stephen Reed (
Date: Tue May 21 2002 - 15:38:44 MDT

On Tue, 21 May 2002, Eliezer S. Yudkowsky wrote:

> GOFAI philosophers are famous for insisting that the relations between words
> constitute the whole of any one word's meaning, but as you know, I think
> that the perceptual patterns invoked by words also have something to do with
> it.

Yes, Cyc operates from the assumption that meaning can be expressed
as relationships among concepts, at least insofar as the system can pass
tests of understanding -- question answering, for example.

We do draw a distinction between words and concepts, according to GOFCL
(Good Old Fashioned Computational Linguistics). Words are elements of
natural language, and concepts are the elements of the (Cyc) knowledge
base.
I suppose that given a sufficient model (or lots of instances) of what you
call perceptual patterns, I would try to determine a vocabulary to assert
relationships among the objects in the perception. Fuzzy terms could be
used to collapse the (overly precise) features extracted from visual
perceptions. So if the system were shown enough pictures of Apples,
then the assertions might be of the form: "Apples are approximately red in
color"; "Apples are about 5 cm in diameter". Given a KB containing such
assertions, and a vision system extracting those features, I believe that
Cyc could classify a new picture as an Apple picture or not.
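To make the idea concrete, here is a minimal sketch (not Cyc code; all the names and thresholds are hypothetical illustrations) of how fuzzy assertions like "Apples are approximately red" and "Apples are about 5 cm in diameter" could score features handed over by a vision system:

```python
def fuzzy_match(value, target, tolerance):
    """Degree of membership in [0, 1]: 1.0 at the target value,
    falling off linearly to 0.0 at +/- tolerance."""
    return max(0.0, 1.0 - abs(value - target) / tolerance)

# "Apples are approximately red in color" / "about 5 cm in diameter",
# encoded as (target, tolerance) pairs per extracted feature.
APPLE_ASSERTIONS = {
    "hue_degrees": (0.0, 40.0),   # red ~ hue 0, allow +/- 40 degrees
    "diameter_cm": (5.0, 3.0),    # about 5 cm, allow +/- 3 cm
}

def classify_as_apple(features, threshold=0.5):
    """Average the fuzzy membership of each feature; call the picture
    an Apple picture when the combined score clears the threshold."""
    scores = [fuzzy_match(features[name], target, tol)
              for name, (target, tol) in APPLE_ASSERTIONS.items()]
    score = sum(scores) / len(scores)
    return score >= threshold, score

# A reddish, apple-sized object scores high; a green beach ball does not.
is_apple, score = classify_as_apple({"hue_degrees": 10.0,
                                     "diameter_cm": 6.0})
```

The point is only that overly precise pixel-level features collapse into a handful of fuzzy terms before classification; the real KB machinery would of course be richer than an averaged membership score.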

> I also think that being able to reconstruct a scrambled concept
> net, in this way, is a necessary condition for saying that the AI possesses
> knowledge about the concept behind the word, and not just knowledge about
> how the word is likely to be used in proximity to other words.

Yes, we will face a smaller problem of the same sort when OpenCyc is
positioned as a Semantic Web server. How does one Cyc understand concepts
created by another, peer Cyc? We allow term names to be created in the
natural language of the author; likewise, comments may be in languages
other than English. Term names and comments are opaque to Cyc's current
reasoning, so our plan is to perform analogical reasoning and
classification to fit new knowledge (semi-)automatically into the Cyc
reference ontology. ["Chien" is a small domestic animal that
barks. Maybe it is the same thing as Cyc's reference Dog.]
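As a hedged sketch of the idea (this is not Cycorp's actual alignment algorithm, and the ontology and property names below are invented for illustration), a peer's opaque term name such as "Chien" could be matched against reference concepts by the overlap of its asserted properties:

```python
# Toy reference ontology: each concept maps to its asserted properties.
REFERENCE_ONTOLOGY = {
    "Dog":  {"DomesticatedAnimal", "Barks", "SmallToMediumSize"},
    "Cat":  {"DomesticatedAnimal", "Meows", "SmallToMediumSize"},
    "Wolf": {"WildAnimal", "Howls", "MediumSize"},
}

def best_alignment(new_properties):
    """Score each reference concept by Jaccard overlap with the new
    concept's properties; return the best match and its score."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(((name, jaccard(new_properties, props))
                for name, props in REFERENCE_ONTOLOGY.items()),
               key=lambda pair: pair[1])

# "Chien" is a small domestic animal that barks -- its properties
# coincide with the reference Dog, so Dog is proposed as the match.
chien_properties = {"DomesticatedAnimal", "Barks", "SmallToMediumSize"}
match, score = best_alignment(chien_properties)
```

Classification into the reference ontology would then be a candidate for human review rather than an automatic merge, which is why "(semi-)automatically" is the honest phrasing.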

> I think that from a Cyc's-eye-view, Cyc may know
> something about the grammatical structure of English strings; it knows which
> English strings are synonyms for English strings, and which English strings
> are properties of English strings that describe categories, and which
> English strings denote events that cause events denoted by other English
> strings to occur, and so on, but it doesn't have any idea which events and
> categories the English strings correspond to.

As far as question answering is concerned, I would say there are many
cases in which Cyc's understanding of the concepts denoted by English
words is testable. For example, "World War II" denotes the term
c0fd5d2b-9c29-11b1-9dad-c379636f7270, which has the name WorldWarII. Cyc
knows (has the assertion): The Nuremberg Trials start after the end
of World War II. In CycL: (startsAfterEndingOf NurembergTrials WorldWarII)
Cyc could be asked (in the CycL equivalent of) "Did the Nuremberg Trials
occur before World War II?" and respond "No", giving the justification.
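A minimal sketch of that question-answering step (this is not CycL or the Cyc inference engine; the KB representation and function names are hypothetical) shows how a single startsAfterEndingOf assertion yields both the answer and its justification:

```python
# Toy KB: the tuple ("startsAfterEndingOf", X, Y) asserts that event X
# starts after event Y ends.
KB = {("startsAfterEndingOf", "NurembergTrials", "WorldWarII")}

def occurs_before(x, y):
    """Answer whether event x occurs before event y, returning the
    answer together with the supporting assertion as justification.
    If x starts after y ends, x cannot occur before y."""
    if ("startsAfterEndingOf", x, y) in KB:
        return "No", f"(startsAfterEndingOf {x} {y})"
    if ("startsAfterEndingOf", y, x) in KB:
        return "Yes", f"(startsAfterEndingOf {y} {x})"
    return "Unknown", None

answer, justification = occurs_before("NurembergTrials", "WorldWarII")
```

The real engine chains many such temporal predicates through inference rules, but even this one-assertion case shows the shape of the test: the answer is derivable from asserted concept relationships, not from the distribution of English strings.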

> If I had to pick an incremental direction for Cyc (as opposed to a complete philosophical
> revision), I think it would be focusing on those Cyc concepts that can be
> grounded in the complex data of Cyc's internals - i.e., teaching Cyc
> perceptual concepts for its own internals.

Agreed, as that is a path to Seed AI.


Stephen L. Reed                  phone:  512.342.4036
Cycorp, Suite 100                  fax:  512.342.4040
3721 Executive Center Drive      email:
Austin, TX 78731                   web:
         download OpenCyc at

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT