RE: DGI Paper

From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 05 2002 - 18:19:13 MDT


> But if you actually grew up in
> BrainSci culture, hearing about how neurons can efficiently implement
> logical inference operations on small networks is enough to instantly
> identify the speaker as a CompSci conspirator.

I agree that my background is more CS than neuroscience.

However, Hebbian Logic is explicitly about how neural nets can
*inefficiently* implement logical inference operations on *large neural
networks*. I think my paper makes that quite clear.

In that sense, it is different from most CS work on neural nets. And I
definitely read a lot of neuroscience papers while writing it, so I don't
think it is totally naive in brain science terms.
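
To give a rough flavor of what I mean by "inefficient" and "statistical"
here -- and this is just a toy sketch I'm inventing for this e-mail, not
code from the paper or from Novamente -- think of a Hebbian link whose
strength drifts toward the co-activation statistics of two node
populations, so that it ends up crudely encoding something like P(B|A),
a slow, noisy stand-in for the inference "A implies B":

    import random

    # Toy sketch: a Hebbian link strength drifting toward P(B active | A active),
    # a crude statistical stand-in for the implication A -> B.
    def hebbian_update(weight, a_active, b_active, rate=0.01):
        if a_active:
            target = 1.0 if b_active else 0.0
            weight += rate * (target - weight)
        return weight

    w = 0.5
    for _ in range(10000):
        a = random.random() < 0.3          # A fires 30% of the time
        b = a and random.random() < 0.9    # when A fires, B usually follows
        w = hebbian_update(w, a, b)
    print(w)   # ends up near 0.9 -- an "inefficient" estimate of P(B|A)

The real story involves large, noisy populations of neurons rather than
single links, but the flavor is the same.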

> I recommend reading Barsalou's
> "Perceptual Symbol Systems" or the opening chapters of Kosslyn's
> "Image and
> Brain". There is a genuine civilizational divide and the bits of BrainSci
> culture that leak across to the other side are much more fragmentary than
> the CompSci culture realizes.

Well, I read Kosslyn's book and talked to him about it once, and we seemed
to have no trouble communicating!

> Again, without meaning any offense, this instantly identifies you as a
> CompSci speaker. For me, Tversky and Kahneman's decision theory is a
> central prototype of what I call "cognitive psychology". For you, formal
> logic is a central prototype of what you call "cognitive psychology".

No, I'm sorry, but this is a completely false statement!! And an odd one
too. Where could you have gotten the idea that I don't know what cognitive
psychology is???

I was a research fellow in a psych department for 2.5 years and I certainly
know what cognitive psychology is.

In fact, I know a lot more cog psych. than I do brain science (which is
partly because there's a lot less cog psych. to know).

Obviously, formal logic is not cognitive psychology; it's mathematics,
verging on CS in some of its subdisciplines.

Cognitive psychology includes the study of how *humans* carry out logical
operations, and there is some work trying to formally model this, but this
is different from math or CS work on formal logic.

Of course, Tversky's work is cognitive psychology, and formal mathematical
logic is not.

I actually have a fair bit of research experience in cog psych, and a little
bit in analyzing data obtained from brain science. So although my formal
training is in math/CS, I have more practical experience with these other
disciplines than you seem to assume.

When my student Takuo Henmi did his PhD on nonlinear-dynamical models of
psychophysics, a couple years back, this was perceptual psych verging on cog
psych. It was very different from CS or mathematical work. What we were
doing was actually collecting data about human visual perception, in the UWA
vision lab, and creating mathematical models that tried to explain the data.
I am rather well aware of the difference between this kind of work and the
math/CS work I've done.

When my student Graham Zemunik (he finished his PhD under someone else,
after I left UWA) wrote his PhD on a computational model of the cockroach
brain, he was really doing biological modeling, not cognitive psych. Half of
the work he had to do was sorting through various contradictory biology
papers to try to get a coherent view of what is known biologically about
cockroach information processing. This was very different from CS, even though the
mathematical model he created was embodied in a computer simulation.

Frankly, I found cognitive psych rather frustrating because it takes so
much art, science, and patience to design lab experiments that will
tell you anything of even moderate interest about the mind.
Mind dynamics, my main interest, can't really be studied using the
experimental methods of cog psych. today.

This year I have worked a little bit with my friend Barak Pearlmutter, who
does the data analysis work for the MEG lab here. They generate nice
122-dimensional time series data describing the magnetic fields at various
points on the brain surface. This work is at the borderline of CS, math,
cog psych and brain science -- it's pretty interesting. But there's a long
way to go, still. Basically they're just dealing with issues like
localizing which part of the brain is most active when different types of
simple percepts or actions are present. Inferring a model of brain dynamics
from this data is a hard problem, and a problem we may apply Novamente to.
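
To make the flavor of that analysis concrete (a toy sketch of my own here,
with synthetic data and made-up array shapes, not Barak's actual pipeline):
you take the multichannel time series, compare signal power before and
after stimulus onset, and ask which sensors change the most.

    import numpy as np

    # Toy sketch: 122 MEG channels x 1000 time samples, stimulus onset at t=500.
    # Synthetic data; a real analysis would load recorded, averaged epochs.
    rng = np.random.default_rng(0)
    data = rng.normal(size=(122, 1000))
    data[40:45, 500:] += 2.0        # pretend a few channels respond to the percept

    baseline_power = (data[:, :500] ** 2).mean(axis=1)
    evoked_power = (data[:, 500:] ** 2).mean(axis=1)
    change = evoked_power - baseline_power

    # The channels with the largest power increase crudely "localize" the response.
    print(np.argsort(change)[-5:])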

> By the
> standards of BrainSci civilization, Novamente's logical inference mimics a
> small, stereotypically logical subset of the kinds of reasoning
> that people
> are known to use.

This is a bad misunderstanding of Novamente's inference engine, but I'm not
going to dig into it in this e-mail.

If you would like to pursue this topic seriously, please give me a list of
"kinds of reasoning that people are known to use" that you believe
Novamente, when complete, will be incapable of doing. I will then explain
to you how I believe a completed Novamente will handle these kinds of
reasoning.

Without this kind of detail, it is impossible to respond sensibly to your
statement.

> I realize that you consider Hebbian Logic, emergence and chaos theory, and
> Novamente's expanded inference mechanisms to be clear proof of consilient
> integration with BrainSci, but from BrainSci culture these things
> look like
> prototypical central cases of CompSci. There is a gap here, it's not just
> me, and it's much bigger than you think.

I do not think that I have *clear proof* of Novamente's conceptual
compatibility with brain science.

And I do not think that you have *clear proof* of its incompatibility
either.

I think that we do not understand enough about the brain to formulate clear
proof of theses such as this, in one direction or another.

> I would argue that most of what you
> consider "abstract" thinking is built up from the crossmodal object sense
> and introspective perceptions. The key point is that this
> abstract imagery
> is pretty much the same "kind of thing" as sensory imagery; it interacts
> freely with sensory imagery as an equal and interacts with the rest of the
> mind in pretty much the same way as sensory imagery.

Yeah, this is a difference of intuition between us. I doubt this very much.

This is not how my thinking feels to me introspectively, and so far as I
know the cognitive psych literature does not support this strong claim
either.

To me, introspectively, my "abstract imagery" is not at all the same kind of
thing as sensory imagery, and does not interact with the rest of my mind
in similar ways.

I could of course be convinced that my introspective view of my own thought
process is an illusion. But it would take some fairly solid scientific
evidence, and you are not offering any.

> And when I say imagery, I do mean depictive imagery - if you
> close your eyes
> and imagine a cat, then there is actually a cat-shaped activation
> pattern in
> your visual cortex,

Yeah, that's fine, but what about when I close my eyes and imagine an alien
from a nondimensional universe, playing with a pool full of differential
operators that act on a space of melodies played at a pitch too high for a
human ear to hear?

What about when I close my eyes and imagine that I'm a butterfly dreaming
that I'm a very small uncomputable number that's dreaming that I'm a moth?

Something rather different from sensory-style imagery seems to be going on
in my mind in these cases. Sensory-style images are one ingredient, but
there seems to be a very different kind of form-construction and
form-manipulation going on too.

> Novamente uses the traditional (CompSci) view of cognition as a process
> separate from the depictive imagery of sensory perception, in
> which thoughts
> about cats are represented by propositions that include a cat symbol which
> receives a higher activation level. Novamente's entire thought processes
> are propositional rather than perceptual. So it's not surprising that the
> idea of mental imagery as an entire level of organization may come as a
> shock.

Again, you are badly and profoundly misunderstanding Novamente. Since the
book I gave you to read is crudely written at this point, I don't blame you
for misunderstanding things on first read. But it is frustrating that you
persist in holding the same misconceptions even after I specifically attempt
to clarify them. I guess explaining this stuff clearly is just VERY, VERY
HARD; I wish I were better at it.

It is TRUE that Novamente's "brain" (its space of nodes & links) lacks a 3D
structure, so that (assuming it has a camera eye), when it sees a cat, it
will not have a 3D picture of a cat in its brain. Computer RAM is not made
that way, and Novamente does not try to simulate a 3D brain structure in the
linear memory of a computer.

However, it IS true that when a camera-endowed Novamente sees a cat, the
result is a huge complex of dynamic node-link activations and node-and-link
interactions.... Will there be a "cat" node somewhere in the system that is
activated whenever cats are seen? Maybe. Or maybe there will just be a
complex of nodes that are stimulated together whenever cats are seen. Or
maybe there will be several different complexes that are stimulated when
several different types of cats are seen, with very loose inter-complex
couplings.
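
Very roughly, and again this is just an illustrative toy of my own (not
actual Novamente code; the node names and the spreading-activation rule
are grossly simplified), a "cat map" is nothing more than a set of nodes
whose links make them tend to become active together, so that stimulating
part of the complex lights up the rest:

    # Toy sketch: a "map" as a set of nodes that co-activate via weighted links.
    links = {
        ("whiskers", "cat-shape"): 0.7,
        ("cat-shape", "purring"): 0.8,
        ("purring", "whiskers"): 0.6,
    }

    def spread(seed_nodes, steps=3, threshold=0.5):
        # crude spreading activation: a node turns on when a sufficiently
        # strong link connects it to an already-active node
        active = set(seed_nodes)
        for _ in range(steps):
            for (a, b), w in links.items():
                if w >= threshold:
                    if a in active:
                        active.add(b)
                    if b in active:
                        active.add(a)
        return active

    print(spread({"whiskers"}))   # the whole cat-complex becomes active together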

The dichotomy of "propositional versus perceptual" is in my view a false
one. To me, "propositional" is just a point of view one can take about
anything at all. For instance, one can formulate a neural network system as
a logical production system if one wants to, where the logical propositions
(rules) encode the dynamics of the NN. In fact, Novamente's "distributed
schemas" are very much like little NNs. The logical reasoning component of
the system may view them propositionally, but considered in themselves, they
are acting pretty much just like neural nets.
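
As a trivial illustration of the "point of view" claim (again a toy of my
own invention, not Novamente code): the very same threshold-unit dynamic
can be read numerically, as a neural-net update, or propositionally, as
the rule "IF A AND B THEN C".

    # Toy sketch: one threshold unit, read two ways.
    weights = {"A": 0.6, "B": 0.5}
    theta = 0.8

    def nn_view(act):
        # neural-net reading: weighted sum passed through a hard threshold
        return sum(weights[n] * act[n] for n in weights) > theta

    def rule_view(act):
        # propositional reading of the same dynamic: with 0/1 activations and
        # these weights, the unit fires exactly when A AND B are both active
        return bool(act["A"] and act["B"])

    for a in (0, 1):
        for b in (0, 1):
            assert nn_view({"A": a, "B": b}) == rule_view({"A": a, "B": b})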

The idea of mental imagery as a level of organization is not problematic; I
think that this kind of "border zone" between perception and cognition does
exist and is important. What I reject is the notion that this is the
majority of cognition.

> And I think that the kind of abstract thought I think you're thinking of,
> implemented by Novamente using propositions, is implemented using
> the above
> kind of mental imagery.

In my view, abstract thought is only partly implemented using mental
imagery. There's a lot more to the story.

> I haven't read Edelman's books, but are you sure that it really emphasizes
> the evolutionary formation of long-term neural structures rather than the
> evolution of short-term neural patterns?

Yes, I am 100% sure of this.

> Basically, I have a model of how concepts work in which concept formation
> inherently requires that certain internally specialized subsystems operate
> together in a complex dance - creating new concepts by mixing their
> internals together is an intriguing notion but I have no need for that
> hypothesis with respect to humans, though it might be well worth trying in
> AIs as long as all the complex machinery is still there.

You may have no need for this hypothesis, but you are also very, very far
from explaining the details of advanced human cognition using your own
theory.

> How exactly would neural maps reproduce internally, anyway? It's
> clear how
> activation patterns could do this, but I can't recall hearing offhand of a
> postulated mechanism whereby a neural structure can send signals
> to another
> neural area that results in the long-term potentiation of a duplicate of
> that neural structure.

There is lots of evidence for this sort of phenomenon happening, but I don't
have the references at hand right now.
I will try to dig them up.

> I think your intuition on this subject derives from Novamente (a) having a
> propositional representation of concepts and (b) lacking all the complex
> interacting machinery that's necessary to form new concepts
> without playing
> with their internals.

Of course, this reverses reality. It was my intuitions about the mind that
led me to the initial Novamente design, not the Novamente design that led me
to my intuitions about the mind.

> In fact, I would say that Novamente's
> concepts don't
> have any internals.

It is hard for me to understand what you could possibly mean by this. A
Novamente concept, inasmuch as it's represented by a "map" of nodes/links
that tend to be simultaneously active, obviously has internals: the
nodes/links that constitute the map.

> I disagree, but I think our very different perspectives on complexity and
> simplicity are showing. To me, "financial trading" and "biodatabase
> analysis" are utterly separate from "The Net" as an environment; they have
> different sensory structures, different behaviors, different invariants,
> different regularities, different everything.

When you get down to the nitty-gritty, there are plenty of similarities
between financial and bio databases. And these databases connect to text
databases that are relevant to their quantitative contents. And these text
databases relate to a lot of the text on the world's Web pages....

> we have extremely different ideas of what
> experiential learning is about,

To me, in a nutshell, it's about learning to perceive/act-in/cognize a
portion of the world cooperatively with (and ultimately in communication
with) other minds that are perceiving/acting-in/cognizing the same portion
of the world.

> This is again one of those things that threw me completely until I paused
> and tried to visualize you visualizing Novamente. What you are calling a
> "concept", I would call a "thought".

Not at all true.

I would call "cat" a concept and so would you. I would call "Eliezer" a
concept and so would you.

It's OK to use the word "thought" for an ephemeral pattern arising in the
mind, but once a thought is remembered and brought back from memory and
mulled over and over again, doesn't it become a "concept"?

> It's certainly not a full concept that can be used as an element in other
> concept structures. How would you invoke it - as an element in a concept
> structure, and not just a memory - if you can't name it?

I personally, in my own mind, can invoke concepts WITHOUT NAMING THEM.

To take a nonmathematical example, I do this when composing and improvising
music ALL THE TIME. I invoke concepts about ways of playing different
chords or note sequences, and I have named none of these concepts. Naming
them meaningfully would be very, very hard.

> > I guess I still don't fully understand your notion of a "concept"
>
> Does it help if I note that I distinguish between "concept" and "concept
> structure" and that neither is analogous to Novamente's propositional
> structures?

I don't understand your definition of "concept structure".

> And *this* one felt like running up against a brick wall. 1, 2,
> and 3 occur
> simultaneously? What on Earth? At this point I started to wonder
> half-seriously whether the placebo effect in cognitive science
> was powerful
> enough to sculpt our minds into completely different architectures through
> the conformation of cognition to our respective expectations.

And this idea would occur to you BEFORE the idea that different human minds
actually work somewhat differently?

> I think the mix here may be more like 25%/75% if not 15%/85%. The reason
> Novamente feels so alien to me is that, in my humble opinion, you're doing
> everything wrong, and trying to model the emergent qualities of a
> mind built
> using the wrong components and the wrong levels of organization is...
> really, really hard.

I would respect your opinion more if you had personally taken on the
challenge of designing a "real AI" system. I understand you intend to do
this sometime in the future. I suspect that once you have done so, we will
be able to have much more productive conversations. I think it will be
easier to map various Novamente ideas into aspects of your detailed AI
design than it is to map them into aspects of your abstract theory.

> I consider it a warmup for trying to build AI, like
> the mental gymnastics it took to model you modeling DGI using
> your model of
> Novamente as a lens. If you're right and I'm not modeling Novamente
> correctly, I don't envy you the job of modeling me modeling
> Novamente using
> DGI so that you can figure out where I went wrong.

Eliezer, believe it or not, modeling your personal internal model of
Novamente is not something I'm very interested in doing.

It's clear that your model of Novamente is very badly wrong, but my approach
to remedying this would be to try to explain Novamente more clearly *in
general*, rather than to try to figure out exactly what the roots of your
personal misunderstandings are.

In 6 months or so we'll have a much better version of the book, but I don't
know if this will help you. It seems to me that you have a strong emotional
reaction to some particular aspects of the design, which to some extent
blinds you to understanding the design as a whole.

Anyway, having you or any other individual think Novamente is a good design
is not that important to me. Different people are gonna have different
tastes.

I do appreciate your feedback, because it has helped me to see which aspects
of the book need the most work. Your feedback on the AI design itself has
frankly not been very helpful, because you simply reject the initial premise
of the design, which means that you have nothing useful to say about the
details.

Perhaps you're right and it's not possible to craft a "middle way" between
symbolic & subsymbolic AI, as Novamente attempts to do. But I am nowhere
near convinced by your arguments, which include a huge number of badly
incorrect statements about Novamente, as well as a large number of
statements of your intuitive feelings framed as if they were definitive
facts.

-- Ben G



