From: Ben Goertzel (ben@goertzel.org)
Date: Sun May 05 2002 - 19:23:30 MDT
Hello all,
I have posted two essays online, and would be curious to hear feedback from
interested members of this community.
Some leisure reading for the masochists in the audience... ;->
1) "Thoughts on AI Morality."
http://www.goertzel.org/dynapsyc/2002/AIMorality.htm
Last night I finally got the urge to write my long-nurtured riposte to
Eliezer's CFAI document. It is not nearly as long as CFAI, and I don't
think I will write a long document on the topic any time soon, since one of
my feelings on the matter is that it is too early in the development of AGI
to say very much about AI morality.
However, I think I did an OK job in this document of explaining why I
intuitively feel the "Friendly AI goal system" Eliezer has proposed is
unlikely to work.
Of course, I do not claim to have *proved* his approach will not work. I
don't think we have the tools to prove anything significant about AGI goal
systems or morality at this point. All I have done is to present my
intuition and give the reasons for it, which are deeply tied to my own
theory of mind.
Although I use some Novamente examples in this essay, the main points are
not specifically tied to Novamente; they have to do with my thoughts on the
general nature of mind and AGI.
This essay is totally nontechnical.
2) "Hebbian Logic:The Emergence of Symbolic Reasoning in Formal Neural
Networks via Cell Assemblies and a Logic-Friendly Variant of Hebbian
Learning."
http://www.goertzel.org/dynapsyc/2002/HebbianLogic.htm
This presents an *avowedly speculative* theory of how advanced cognition
*may* emerge from neurodynamics.
I sent Eliezer and Peter and a few others a copy of this a month or two ago.
Contrary to what Eliezer said in a recent post, however, Hebbian Logic is
*not* a theory of how small neural nets can give rise to emergent logical
inference in an efficient way. Rather, it is a theory of how *very large*
neural nets can give rise to emergent logical inference in a *very
inefficient* way.
This one is a bit technical. It proposes some specific neural net learning
rules, variants of the standard Hebb rule, and makes some fairly hand-wavy
arguments that these learning rules could plausibly give rise to emergent
behavior following the rules of probabilistic logic.
I don't think I've "cracked the nut of how the brain works" here or anything
like that. My objective was a lot less lofty: just to convince myself that
the gap between neurodynamics and logic is not as large as most people
think. It seems possible and not that tough to build a conceptual bridge
between the two levels.
This is definitely not a Novamente article (for that, see
www.realai.net/article.htm). Novamente is not founded on Hebbian Logic.
One could seek to create a real AI based on Hebbian Logic; my intuition is
that this would lead to a vastly less efficient system than Novamente.
Hebbian Logic is, in spirit, a little closer to Peter Voss's approach than
to Novamente, though it is very different from Peter's approach in detail.
Conceptually, however, the ideas in this article emanate from the same
*philosophy* as Novamente does. Part of this philosophy is that the
distinction between logic-based AI and NN-based AI is largely bogus.
Semantic nets and neural nets, in my view, are not as different as they're
made out to be. Novamente embodies this philosophy by using a hybrid
semantic/neural network architecture. Hebbian Logic embodies it by making
hypotheses about how the human mind's semantic nets emerge from the human
brain's neural nets.
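To make that concrete, picture each node carrying both a logical truth value
and a neural-style activation, along these lines (a hypothetical sketch of
the general idea, not actual Novamente internals; all the names here are my
own):

    from dataclasses import dataclass

    @dataclass
    class HybridNode:
        # A node living in a semantic net and a neural net at once.
        name: str           # semantic label, e.g. "cat"
        truth: float        # logical strength in [0, 1], used by inference
        activation: float   # neural-style quantity, spread along weighted links

Inference rules act on the truth values while activation spreads along the
links, so the same network supports both kinds of dynamics.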
Yours,
Ben G