From: Ben Goertzel (ben@goertzel.org)
Date: Tue Sep 13 2005 - 10:19:42 MDT
Richard wrote:
> The (Neural Net) symbol engine generates these distributed patterns that
> correspond to symbols. The logic engine uses these to reason with. Now
> imagine that the logic engine does something (I am not sure what) to
> cause there to be a need for a new symbol. This would be difficult or
> impossible, because there is no way for you to impose a new symbol on
> the symbol engine; the symbols emerge, so to create a new one you have
> to set up the right pattern of connections across a big chunk of
> network, you can't just write another symbol to memory the way you would
> in a conventional system. The logic engine doesn't know about neural
> signals, only high level symbols.
>
> This question hinges on my suggestion that a logic engine would somehow
> need to create or otherwise modify the symbols themselves. So tell me
> folks: can we guarantee that the logic engine can get along without
> ever touching any symbols? You know more about this than I do. Is
> there going to be a firewall between the logic engine and whatever
> creates and maintains symbols? You can look but you can't touch, so to
> speak? This all speaks to the question of what, exactly, such a built-in
> logic engine would be for.
>
> I could stand to be enlightened on this point. In my world, I wouldn't
> try to connect them, so I have not yet considered the problem.
You seem to be using the word "symbol" in a strange way, because it's
not a hard problem for a logic engine to create new symbols according to
the ordinary usage of that word.... But I'll try to answer the spirit of
your question in spite of not fully understanding your use of terminology.
I'll describe what would happen if we coupled Novamente to a third-party
neural-net vision-processing engine, a scenario that I've thought about
before, though there are no immediate plans to do so.
Let's assume that the input neurons of the vision engine match up to
pixels on a camera, and that the activations of its output neurons
are computed from these inputs.
The output neurons of the vision engine then map into certain node
types in Novamente, call them VisionNodes (such things don't exist
in the current Novamente). Novamente records relationships of the form
"The output of VisionNode n at time T is .37" and so forth. It then
has the job of recognizing patterns among these relationships, using
all the tools at its disposal, including
-- probabilistic reasoning (in both fast, lightweight and sophisticated,
heavyweight variants)
-- stochastic frequent itemset mining
-- evolutionary learning on program-trees representing complex predicates
These patterns are embodied as nodes and links in Novamente's knowledge
base. Further patterns may be learned connecting the relationships
abstracted from visual input to relationships learned via other means,
say linguistic or acoustic ones, or relationships imported from databases.
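
To make that first step concrete, here is a toy Python sketch (not actual
Novamente code; the VisionNodeOutput record and the function name are just
illustrative) of turning one frame of vision-engine output into timestamped
relationships of the sort described above:

from dataclasses import dataclass

@dataclass
class VisionNodeOutput:
    # one relationship: "the output of VisionNode node_id at time t is value"
    node_id: int
    t: float
    value: float

def record_vision_outputs(output_activations, t):
    # turn one frame of vision-engine output activations into records that
    # pattern mining, reasoning, etc. can later look for regularities in
    return [VisionNodeOutput(node_id=i, t=t, value=v)
            for i, v in enumerate(output_activations)]

# e.g. three output neurons observed at time t = 0.0
records = record_vision_outputs([0.37, 0.92, 0.05], t=0.0)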
Now, consider for instance a pattern corresponding to the concept of
a "chair." This may be represented by a single PredicateNode within
Novamente. But when this PredicateNode is activated, the activation
spreads to other nodes and links within Novamente as well,
via multiple iterations of Novamente's attention allocation dynamics.
Thus, there is an "attractor pattern" corresponding to "chair" in
Novamente, as well as a specific node corresponding to "chair."
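
To give the flavor of this in code (a toy only, far simpler than Novamente's
actual attention allocation dynamics; all node names and link weights below
are invented), stimulating a "chair" node and letting activation spread for a
few iterations yields a persistent cluster of co-active nodes:

# hypothetical weighted associations among nodes (made-up toy data)
links = {
    "chair": {"seat": 0.8, "legs": 0.7, "sit": 0.6},
    "seat":  {"chair": 0.8, "sofa": 0.5},
    "legs":  {"chair": 0.7, "table": 0.4},
    "sit":   {"chair": 0.6, "rest": 0.3},
}

nodes = set(links) | {dst for targets in links.values() for dst in targets}
activation = {n: 0.0 for n in nodes}
activation["chair"] = 1.0   # stimulate the "chair" PredicateNode

for _ in range(5):          # a few iterations of spreading activation
    spread = {n: 0.0 for n in nodes}
    for src, act in activation.items():
        for dst, w in links.get(src, {}).items():
            spread[dst] += act * w
    total = sum(spread.values()) or 1.0
    activation = {n: a / total for n, a in spread.items()}  # conserve "attention"

# the nodes that stay strongly active together form the "chair" attractor pattern
attractor = {n for n, a in activation.items() if a > 0.1}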
Novamente's wired-in probabilistic reasoning system allows it to
reason in a probabilistically correct way (albeit with some errors
due to heuristics) about the PredicateNode embodying the abstracted
concept of "chair." But, the dynamics of attractors in Novamente
will also lead the "chair" attractor to interact with other attractors
in a way that roughly follows the rules of probability theory,
albeit with a lesser accuracy.
The human brain seems to have this higher-level "emergent logic"
coming out of interactions between attractor patterns. It doesn't
have precise probabilistic logic wired in at the lower level. On
the other hand, what it does apparently have is Hebbian learning of
some form wired in at the lower level, and it's not hard to see
(and I've argued in detail elsewhere) that Hebbian learning is
basically a noisy, slow way of accomplishing probabilistic inference.
So I would say the brain does have these two levels just like
Novamente. The brain has lower-level reasoning consisting of
Hebbian learning and higher-level reasoning that is emergent
from attractor patterns, whereas Novamente has lower-level
reasoning consisting of PTL logic and higher-level reasoning
that is emergent from attractor patterns.
Both in the brain and in Novamente, the relationship between these
two levels of reasoning is quite subtle.
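
Here is a minimal simulation of that claim about Hebbian learning (my own toy
example, not a model of real cortex): a presynaptic-gated Hebbian weight
update, fed random firing data, drifts toward the conditional probability that
B fires given that A fires, slowly and noisily:

import random

p_a = 0.5           # probability that presynaptic neuron A fires
p_b_given_a = 0.8   # probability that postsynaptic neuron B fires when A fires

w, rate = 0.0, 0.01
for _ in range(20000):
    a = random.random() < p_a
    b = a and (random.random() < p_b_given_a)
    if a:                        # Hebbian: adjust the weight only when A fires
        w += rate * ((1.0 if b else 0.0) - w)

print(round(w, 2))   # hovers near 0.8, i.e. a noisy estimate of P(B | A)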
Regarding the interaction with the vision system, what happens if
Novamente's understanding of the scene around it isn't good enough?
Well, it can try harder to recognize patterns in the output of the
vision engine. Or it can manipulate the parameters of the vision
engine, trying to shape the output so that it contains better
patterns.
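
As a cartoon of that parameter manipulation (everything below is invented;
real vision engines expose different knobs), Novamente could hill-climb on a
vision-engine parameter, keeping changes that make the output more
pattern-rich:

import random

def pattern_quality(threshold):
    # stand-in for "how much structure Novamente's pattern miners find in the
    # vision engine's output at this parameter setting"; a made-up curve that
    # happens to peak at threshold = 0.6
    return 1.0 - (threshold - 0.6) ** 2

threshold, step = 0.2, 0.05
for _ in range(50):                # simple hill-climbing on the parameter
    candidate = threshold + random.choice([-step, step])
    if pattern_quality(candidate) > pattern_quality(threshold):
        threshold = candidate

print(round(threshold, 2))         # settles near 0.6, where the "patterns" are best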
Finally, the vision system may need to access long-term memory to
help it recognize things better. In this case, there needs to
be some feedback, wherein output patterns are sent to Novamente,
Novamente processes them, and then stimulates
the VisionNodes mentioned above (or a separate set of VisionNodes)
with "visual memory" of similar things it has seen before. The
neural net vision system may then use this visual memory to guide
its processing.
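
Roughly, in code (again, every function and number below is invented for
illustration, not Novamente's or any real vision engine's interface), that
feedback loop would look something like this:

def vision_engine(pixels, memory_bias=None):
    # stand-in for the third-party neural net: here it just blends the raw
    # input with whatever "visual memory" was fed back to it
    if memory_bias is None:
        return list(pixels)
    return [0.7 * p + 0.3 * m for p, m in zip(pixels, memory_bias)]

def recall_similar(outputs, long_term_memory):
    # stand-in for Novamente retrieving the stored scene whose pattern is
    # closest to the current vision-engine output
    return min(long_term_memory,
               key=lambda scene: sum((o - s) ** 2 for o, s in zip(outputs, scene)))

long_term_memory = [[0.9, 0.1, 0.8], [0.2, 0.9, 0.1]]   # stored "scenes"
pixels = [0.8, 0.2, 0.6]                                # current noisy input

first_pass = vision_engine(pixels)
memory = recall_similar(first_pass, long_term_memory)       # Novamente side
second_pass = vision_engine(pixels, memory_bias=memory)     # memory-guided pass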
Of course, in this email I have only outlined a few aspects of
mental processing, which is a very complex matter, but perhaps
I've addressed some of the issues that were concerning you in
your e-mail.
-- Ben