Re: DGI Paper

From: Eliezer S. Yudkowsky
Date: Sat May 04 2002 - 13:02:40 MDT

Ben Goertzel wrote:
> hi,
> > Recently (I'm not sure if I was doing this during the whole of the DGI
> > paper) I've been trying to restrict "sensory" to information produced by
> > environmental sense organs. However, there are other perceptions than
> > this. Imaginative imagery exists within the same working memory that
> > sensory information flows into, but it's produced by a different source.
> > Abstract imagery might involve tracking "objects", which can be very
> > high-level features of a sensory modality, but can also be features of no
> > sensory modality at all.
> I think this is an acceptable use of the term "perceptual", but only if you
> explicitly articulate that this is how you're using the word, since it
> differs from the common usage in cognitive psychology.
> Your usage is part of common English of course. We can say, e.g. "I sense
> you're upset about this," or "I perceived I was going through some kind of
> change" -- abstract sensations & perceptions. These common English usages
> aren't reflected in the way the terms have become specialized within
> cognitive psych, that's all.

Y'know, I didn't understand what you were getting at here, at all, until
Barsalou's "Perceptual Symbol Systems" reminded me that "perception" can
mean very different things depending on your home civilization. My home
civilization is what you've called the "brain science branch of cognitive
science", whereas your home civilization is what you've called the "computer
science branch of cognitive science". Personally I would call these
civilizations "cognitive science" and "computer science", but I'll use your
terms and call them BrainSci and CompSci. Now of course the people in
CompSci would argue that they are using all kinds of inspiration from the
brain; for example, your paper on Hebbian Logic that attempts to show how
neurons can implement logical inference. But if you actually grew up in
BrainSci culture, hearing about how neurons can efficiently implement
logical inference operations on small networks is enough to instantly
identify the speaker as a CompSci conspirator.

The point is that in CompSci culture there is a traditional distinction
between "cognition" and "perception". Enlightened CompSciFolk are the ones
who admit that cognition interacts with perception in some way. On the
BrainSci side of the divide, "perception" has a very different and much more
inclusive meaning, and it is not automatically assumed that "cognition" and
"perception" are modular subsystems or that they use different underlying

The way I'm using "perception" is not exactly standard in BrainSci
civilization, but it is pretty close to standard usage, I think. I realize
that many readers of "Real AI" will hail from CompSci and that if my usage
of "perception" throws up a stumbling block to them, I need to at least
mention the source of the difficulty.

If you still think this is just me, I recommend reading Barsalou's
"Perceptual Symbol Systems" or the opening chapters of Kosslyn's "Image and
Brain". There is a genuine civilizational divide and the bits of BrainSci
culture that leak across to the other side are much more fragmentary than
the CompSci culture realizes.

> These issues are impossible to avoid in getting scientific about the mind.
> For instance, the way we use the word "reason" in Novamente includes things
> that logicians don't all consider "reason" -- very speculative kinds of
> reasoning.

Again, without meaning any offense, this instantly identifies you as a
CompSci speaker. For me, Tversky and Kahneman's decision theory is a
central prototype of what I call "cognitive psychology". For you, formal
logic is a central prototype of what you call "cognitive psychology". By
the standards of CompSci civilization, Novamente uses a dangerous, sexy kind
of logical inference, bordering on Here There Be Dragons territory. By the
standards of BrainSci civilization, Novamente's logical inference mimics a
small, stereotypically logical subset of the kinds of reasoning that people
are known to use.

I realize that you consider Hebbian Logic, emergence and chaos theory, and
Novamente's expanded inference mechanisms to be clear proof of consilient
integration with BrainSci, but from BrainSci culture these things look like
prototypical central cases of CompSci. There is a gap here, it's not just
me, and it's much bigger than you think.

> > Right - but it's associated with the behaviors, in abstract imagery, of
> > other math concepts. That's why you can't discover complex math concepts
> > without knowing simple math concepts; the complex math concepts are
> > abstracted from the abstract imagery for complex behaviors or complex
> > relations of simple math concepts. But there is still imagery. It is not
> > purely conceptual; one is imagining objects that are abstract objects
> > instead of sensory objects, and imagining properties that are abstract
> > properties instead of sensory properties, but there is still
> > imagery there.
> > It can be mapped onto the visual imagery of the blackboard and so on.
> I am not at all sure you are right about this. I think that abstract
> reasoning can sometimes proceed WITHOUT the imagining of specific objects.
> I think what you're describing is just *one among many modes* of abstract
> reasoning. I think sometimes we pass from abstraction to abstraction
> without the introduction of anything that is "imagery" in any familiar
> sense.
> And the mapping onto the visual blackboard may be a very very distortive
> mapping, which does not support the transformations required for accurate
> inference in the given abstract domain. Visual thinking is not suited for
> everything -- e.g. it works well for calculus and relatively poorly for
> abstract algebra (which involves structures whose symmetries are very
> DIFFERENT from those of the experienced 3D world).
> Again, I think if you're right, it can only be by virtue of having a very
> very very general notion of "imagery" which you haven't yet fully
> articulated.

No, what I mean by imagery is working memory in sensory modalities, and
working memory in some nonsensory but still perceptual modalities, such as
our crossmodal number sense, our crossmodal object-property tracker, and
various introspective perceptions. I would argue that most of what you
consider "abstract" thinking is built up from the crossmodal object sense
and introspective perceptions. The key point is that this abstract imagery
is pretty much the same "kind of thing" as sensory imagery; it interacts
freely with sensory imagery as an equal and interacts with the rest of the
mind in pretty much the same way as sensory imagery.

And when I say imagery, I do mean depictive imagery - if you close your eyes
and imagine a cat, then there is actually a cat-shaped activation pattern in
your visual cortex, adjusting for the logarithmic scaling of retinotopic
maps in the visual system. This is supported by convergent evidence from
functional neuroimaging, pathology of visual deficits, single-cell recording
in animal subjects, functional neuroanatomy, and theoretical consilience
with a very well-established theory of the computational function performed
by visual areas. It is not a "Cartesian theatre", which is a priori
unacceptable; it is an established fact of neuroscience. (If I sound a bit
emphatic here, it's because of that audience questioner at the Foresight
Gathering who claimed that all visual imagery was a Cartesian conspiracy; I
hunted him down afterward and gave him a reference to Kosslyn's "Image and
Brain".)

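The "logarithmic scaling of retinotopic maps" mentioned above can be made concrete with a log-polar transform, the standard idealization of how the retina maps onto early visual cortex. This is a minimal sketch, not a claim about any particular model; the function name and coordinate conventions are my own illustrative choices.

```python
import math

def retinotopic(x, y):
    """Map a retinal point (x, y) to idealized cortical coordinates via a
    log-polar transform: eccentricity is compressed logarithmically,
    while the angle around the fovea is preserved."""
    r = math.hypot(x, y)          # eccentricity (distance from fovea)
    theta = math.atan2(y, x)      # polar angle
    return (math.log1p(r), theta)

# Points near the fovea (small r) get more cortical distance per unit of
# retinal distance than points out in the periphery:
near = retinotopic(1.0, 0.0)[0] - retinotopic(0.5, 0.0)[0]
far = retinotopic(10.5, 0.0)[0] - retinotopic(10.0, 0.0)[0]
assert near > far
```

This is why a "cat-shaped activation pattern" in visual cortex is cat-shaped only after adjusting for the map's distortion: foveal detail is magnified and peripheral detail compressed.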
Novamente uses the traditional (CompSci) view of cognition as a process
separate from the depictive imagery of sensory perception, in which thoughts
about cats are represented by propositions that include a cat symbol which
receives a higher activation level. Novamente's entire thought processes
are propositional rather than perceptual. So it's not surprising that the
idea of mental imagery as an entire level of organization may come as a
surprise.

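The propositional view described above can be sketched in a few lines. This is a deliberately simplified illustration of the "cat symbol receives a higher activation level" picture; the class names and `activate` method are my own assumptions, not Novamente's actual API.

```python
# Sketch of a propositional ("CompSci") representation: a thought about
# a cat is a proposition over symbols, and entertaining the thought
# raises the activation of the symbols it mentions. No depictive,
# picture-like imagery is involved anywhere.

class Symbol:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0

class Proposition:
    def __init__(self, predicate, *args):
        self.predicate = predicate
        self.args = args

    def activate(self, amount=1.0):
        # Boost every symbol the proposition mentions.
        for sym in (self.predicate, *self.args):
            sym.activation += amount

cat, mat, on = Symbol("cat"), Symbol("mat"), Symbol("on")
thought = Proposition(on, cat, mat)   # "the cat is on the mat"
thought.activate()
assert cat.activation > 0.0
```

The contrast with the depictive view is that here "thinking about a cat" is entirely a matter of which symbols are hot, not of any cat-shaped pattern anywhere in the system.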
> > Abstract imagery uses non-depictive, often cross-modal layers that
> > are nonetheless connected by the detector/controller flow to the
> > depictive layers of mental imagery.
> Again, I think this is a very important *kind* of abstract thought, but not
> the only kind.

And I think that the kind of abstract thought I think you're thinking of,
implemented by Novamente using propositions, is implemented using the above
kind of mental imagery. To visualize this, or any real thought, requires
visualizing an immense amount of interlocking mental machinery;
propositional terms may be much easier to visualize but they are not how
thought actually works.

> > Maybe I should put in a paragraph somewhere about "sensory perception"
> > as a special case of "perception".
> Definitely.
> > I think that human concepts don't come from mixing together the internal
> > representations of other concepts.
> I think that some human concepts do, and some don't.
> This is based partly on introspection: it feels to me that many of my
> concepts come that way.

How can you possibly introspect on whether your concepts come about by
mixing internal representations? Sure, many of your new concepts come from
mixing two previous concepts, but how could you possibly tell whether the
mixing occurred by mixing their underlying neural representations?

> Now, you can always maintain that my introspection is inaccurate.
> Of course, your intuition must also be based largely on your own
> introspection (cog sci just doesn't take us this far, yet), which may also
> be inaccurate.

True, but at least my introspection is based on my alleged observation and
extrapolation from events which BrainSci says *should* be open to my
introspection.

> Certainly the combinatory aspect of cognition that I describe has
> significant neurological support. Edelman's books since Neural Darwinism,
> among other sources, emphasize this aspect of the formation of neural maps.

I haven't read Edelman's books, but are you sure that they really emphasize
the evolutionary formation of long-term neural structures rather than the
evolution of short-term neural patterns? Evolutionary hypotheses for the
origin of activation patterns are standard fare, most notably in the work of
William Calvin. I can't ever remember hearing an evolutionary hypothesis
for the creation of new long-term neural structures; the closest thing to
this is the selective die-off of most connections during the brain's initial
self-wiring, which is a case of survival of the stable, not of differential
replication. Given that Novamente's long-term content has the same
propositional representation as its thoughts, I can see how the confusion
might arise; so, are you sure this is really what Edelman said?

> > I think that's an AI idiom which is not reflected in the human mind.
> > Humans may be capable of faceting concepts and putting the facets
> > together in new ways, like "an object that smells like coffee and
> > tastes like chocolate", but this is (I think) taking apart the
> > concepts into kernels, not mixing the kernel representations together.
> Well, Edelman and I disagree with you, he based mostly on his neurological
> theory, I based mostly on my introspective intuition.
> What support do you have for your belief, other than that when you
> introspect you do not feel yourself to be combining concepts in such a way?

Well, it's certainly consistent with the separate neurological areas for
association cortex in superior posterior temporal areas, shape/color/texture
recognition in inferior temporal areas, and depictive imagery represented in
the buffer of visual cortical areas. It's not clear to me how your approach
- if it's not one of those cases where you would simply say "But I don't
think we *should* do it the human way" - would explain the neurological
differentiation here.

Basically, I have a model of how concepts work in which concept formation
inherently requires that certain internally specialized subsystems operate
together in a complex dance - creating new concepts by mixing their
internals together is an intriguing notion but I have no need for that
hypothesis with respect to humans, though it might be well worth trying in
AIs as long as all the complex machinery is still there.

How exactly would neural maps reproduce internally, anyway? It's clear how
activation patterns could do this, but I can't recall hearing offhand of a
postulated mechanism whereby a neural structure can send signals to another
neural area that results in the long-term potentiation of a duplicate of
that neural structure.

> > Now it may perhaps be quite useful to open up concepts and play with their
> > internals! I'm just saying that I don't think humans do it that way and I
> > don't think an AI should start off doing it that way.
> My intuition is quite otherwise. I think that very little creative
> innovation will happen in a mind that does not intercombine concepts by
> "opening them up and playing with their internals."

I think your intuition on this subject derives from Novamente (a) having a
propositional representation of concepts and (b) lacking all the complex
interacting machinery that's necessary to form new concepts without playing
with their internals. In fact, I would say that Novamente's concepts don't
have any internals.

> > > I don't understand why you think a baby AI can't learn to see the Net
> > > incrementally.
> >
> > It doesn't have a tractable fitness landscape. No feature structure to
> > speak of. No way to build up complex concepts from simple concepts.
> I think this is quite wrong, actually. There is an incredible richness, for
> example, in all the financial and biological databases available online.
> Trading the markets is one way to interact with the world online, a mode of
> interaction that incorporates all sorts of interesting data. Chatting with
> biologists (initially in a formal language) about info in bio databases is
> another. I think an AI has got to start with nonlinguistic portions of the
> Net, then move to linguistic portions that are closely tied to the
> nonlinguistic portions it knows (financial news, Gene Ontology gene
> descriptions, etc.).
> I think the implicit fitness landscapes in the financial trading and
> collaborative biodatabase analysis spaces are quite tractable.

I disagree, but I think our very different perspectives on complexity and
simplicity are showing. To me, "financial trading" and "biodatabase
analysis" are utterly separate from "The Net" as an environment; they have
different sensory structures, different behaviors, different invariants,
different regularities, different everything. That you would consider these
subcategories of "The Net" because they are reachable over TCP/IP
connections says to me that we have extremely different ideas of what
experiential learning is about, and especially about the kind of innate
specialized complexity needed for experiential learning in a domain. I
guess if you're trying to learn all possible environments using the same
dynamics, it could make sense to regard financial trading as a part of the
'Net. I just don't think it'll work, that's all.

> > > "Differential functions on [-5,5] whose third derivative is
> > confined to the
> > > interval [0,1]"
> >
> > This isn't much of a concept until it has a name. Let's call a function
> > like this a "dodomorphic fizzbin".
> Well, i disagree -- I have concepts like this all the time with no names.
> Naming such a concept only occurs, for me, when I want to communicate it to
> others, or write it down for myself.

This is again one of those things that threw me completely until I paused
and tried to visualize you visualizing Novamente. What you are calling a
"concept", I would call a "thought". In Novamente, you can take a complex
structure of nodes and links, and treat it as a node; Novamente has the same
representation for concepts and concept structures, where "concepts" are
really just special cases of concept structures with one node. In DGI these
things not only have different representations, they live on different
levels of organization.
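The representational collapse being described, where a whole structure of nodes and links can itself be used as a node, is easy to sketch. The class names here are my own illustrative assumptions about a Novamente-style graph, not its real data structures.

```python
# In this sketch, a Structure IS a Node, so "concept" and "concept
# structure" share one representation: any assembly of nodes and links
# can be wrapped up and used as an element of a larger assembly.

class Node:
    def __init__(self, name):
        self.name = name

class Structure(Node):
    """A structure of nodes and links that is itself a Node, so it can
    appear as a single element inside a larger structure."""
    def __init__(self, name, nodes, links):
        super().__init__(name)
        self.nodes = nodes   # member nodes (possibly sub-structures)
        self.links = links   # (source, relation, target) triples

cat, pet = Node("cat"), Node("pet")
cat_is_pet = Structure("cat-is-pet", [cat, pet], [(cat, "isa", pet)])

# The structure can now be used anywhere a plain node is expected:
bigger = Structure("about-cats", [cat_is_pet, Node("fur")], [])
assert isinstance(cat_is_pet, Node)
```

In DGI, by contrast, there is no such subclass relationship: a concept and a deliberative thought built out of concepts would live on different levels of organization with different representations.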

> > > then how is this concept LEARNED? I didn't learn this, I just
> > > INVENTED it.
> >
> > You learned it by inventing it. The invention process took place on the
> > deliberative level of organization. The learning process took place on
> > the concept level of organization and it happened after the invention.
> > You created the mental imagery and then attached it to a concept. These
> > things happen one after the other but they are still different cognitive
> > processes taking place on different levels of organization.
> Somehow, you're telling me it's not a "concept" until it's been named??

It's certainly not a full concept that can be used as an element in other
concept structures. How would you invoke it - as an element in a concept
structure, and not just a memory - if you can't name it? Again, not the way
Novamente does it, where you can always point to any propositional structure
whether or not it has a name.

> I don't see why such a concept has to be "attached" to anything to become a
> "real concept", it seems to me like it's a "real concept" as soon as I start
> thinking about it...
> I guess I still don't fully understand your notion of a "concept"

Does it help if I note that I distinguish between "concept" and "concept
structure" and that neither is analogous to Novamente's propositional

> > 1: "I need to get this paper done."
> > 2: "I sure want some ice cream right now."
> > 3: "Section 3 needs a little work on the spelling."
> > 4: "I've already had my quota of calories for the day."
> > 5: "Maybe I should replace 'dodomorphic' with 'anhaxic plorm'."
> > 6: "If I exercised for an extra hour next week, that would make
> > up for it."
> >
> > If these thoughts are all (human idiom) mental sentences in the internal
> > narrative, I wouldn't expect them to be pronounced simultaneously by a
> > deep adult voice and a squeaky child voice. Rather I would expect them
> > to be interlaced, even though {1, 3, 5} relate to one piece of open goal
> > imagery and {2, 4, 6} relate to a different piece of goal imagery. So
> > the deliberative tracks {1, 3, 5} and {2, 4, 6} are simultaneous, but
> > the thoughts 1, 2, 3, 4, 5, 6 occur sequentially.
> This is not how my mind works introspectively. In my mind, 1, 2 and 3
> appear to me to occur simultaneously.
> Now, you can tell me that I'm deluded and they REALLY occur sequentially in
> my mind but I don't know it.
> But I'm not going to believe you unless you have some REALLY HARD
> neurological proof.

And *this* one felt like running up against a brick wall. 1, 2, and 3 occur
simultaneously? What on Earth? At this point I started to wonder
half-seriously whether the placebo effect in cognitive science was powerful
enough to sculpt our minds into completely different architectures through
the conformation of cognition to our respective expectations.

After the first few moments of sheer, blank incomprehension, though, I
remembered that in Novamente there are, in fact, a great many different
propositional structures being created and activated at any given time, and
I figured that you'd automatically mapped 1, 2, and 3 to Novamente's
propositional structures. I am not saying that you can't simultaneously
want to get a paper done, want some ice cream, and notice that section 3
needs work on the spelling. That happens all the time. What I'm saying is
that you cannot simultaneously enunciate the mental sentences 1, 2, and 3.
Forming a concept structure, linearizing it as a mental sentence in
linguistic form, and speaking it internally, is a different and more complex
mental process than background events in mental imagery. An AI may skip the
linguistic translation but will probably still need to distinguish
activating concept structures from background events.

Cognitive events are multiplexed but only one of them gets internally
enunciated as a mental sentence at any given time.
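The multiplexing claim can be sketched directly, using the numbered sentences from the quoted text: two deliberative tracks run concurrently, but a single internal-narrative channel enunciates one mental sentence at a time. The function below is only an illustration of that serialization, not a model of the underlying machinery.

```python
# Two simultaneous deliberative tracks ({1, 3, 5} and {2, 4, 6}),
# serialized into one sequential internal narrative.

paper_track = [
    "I need to get this paper done.",
    "Section 3 needs a little work on the spelling.",
    "Maybe I should replace 'dodomorphic' with 'anhaxic plorm'.",
]
diet_track = [
    "I sure want some ice cream right now.",
    "I've already had my quota of calories for the day.",
    "If I exercised for an extra hour next week, that would make up for it.",
]

def internal_narrative(*tracks):
    """Interleave simultaneous tracks into one sequential stream:
    many concurrent cognitive events, one enunciated sentence at a time."""
    iters = [iter(t) for t in tracks]
    sentences = []
    while iters:
        for it in list(iters):
            try:
                sentences.append(next(it))
            except StopIteration:
                iters.remove(it)
    return sentences

narrative = internal_narrative(paper_track, diet_track)
assert len(narrative) == 6   # sequential: 1, 2, 3, 4, 5, 6
```

The background goal imagery for both tracks persists throughout; it is only the linguistic enunciation that is forced through the bottleneck one sentence at a time.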

> Pei Wang (one of my chief Webmind collaborators) and I often spent a long
> time arriving at a mutually comprehensible language. Then, once we had, we
> could make 75% of our differences go away. The other 25% we just had to
> chalk up to different intuitions, and move on...

I think the mix here may be more like 25%/75% if not 15%/85%. The reason
Novamente feels so alien to me is that, in my humble opinion, you're doing
everything wrong, and trying to model the emergent qualities of a mind built
using the wrong components and the wrong levels of organization is...
really, really hard. I consider it a warmup for trying to build AI, like
the mental gymnastics it took to model you modeling DGI using your model of
Novamente as a lens. If you're right and I'm not modeling Novamente
correctly, I don't envy you the job of modeling me modeling Novamente using
DGI so that you can figure out where I went wrong.

-- -- -- -- --
Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT