Review of Novamente

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat May 04 2002 - 10:10:18 MDT


I now have enough information to offer preliminary reviews of Novamente and
A2I2.

First, though, a disclaimer: I am not a venture capitalist or an expert on
the AI market. When I analyze Ben's AI I am holding it to the standard of
the Singularity, because Ben has expressed a belief that Novamente occupies
a direct path to real AI and the Singularity. Three nice things I can
legitimately say about Novamente, if it's completed along the lines
described in Ben's manuscript:

1) When complete, Novamente will probably be the most advanced realized AI
system on Earth. IANAVC but it should have capabilities that other systems
don't, and Ben should be able to build something saleable out of it.

2) When complete, Novamente will exhibit capabilities that actually have
something to do with AI and aren't just algorithms wearing funny hats.

3) Novamente uses more than one idea, and the whole is greater than the
sum of the parts.

Okay, now for the *rest* of the review... some of the things I have to say
about Novamente are pretty harsh; Ben is aware of this and may choose to
defend Novamente. Any third parties who join the debate are asked to stick
to the nothing-personal rules of argument.

My overall reaction is that Novamente is much, much simpler than I had been
visualizing from Ben's descriptions; the completed system would be nowhere
near general intelligence or seed AI, and Ben's expectation that either of
these things would be in Novamente's reach seems very alien to me.

Capsule description of Novamente's architecture: Novamente's core
representation is a semantic net, with nodes such as "cat" and "fish", and
relations such as "eats". Some kind of emotional reaction is called for
here, lest others suspect me of secret sympathies for semantic networks:
"AAAARRRRGGGHHH!" Having gotten that over with, let's forge ahead.
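For readers unfamiliar with the representation being criticized, a semantic
net of the kind described can be sketched in a few lines. This is a generic
illustration of the "cat eats fish" style of representation, not Novamente's
actual data structures; all names here are hypothetical.

```python
from collections import defaultdict

class SemanticNet:
    """Toy semantic net: named nodes connected by named relations."""

    def __init__(self):
        # relations[relation_name] is a set of (source, target) node pairs
        self.relations = defaultdict(set)

    def add_relation(self, source, relation, target):
        self.relations[relation].add((source, target))

    def targets(self, source, relation):
        # All nodes reachable from `source` via `relation`
        return {t for s, t in self.relations[relation] if s == source}

net = SemanticNet()
net.add_relation("cat", "eats", "fish")
net.add_relation("cat", "isa", "mammal")
print(net.targets("cat", "eats"))  # {'fish'}
```

The point of the "AAAARRRRGGGHHH" above is that everything interesting in
such a net lives in the interpreter's head, not in the data structure.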

Novamente's core representation is not entirely that of a classical AI; Ben
insists that it be described as "term logic" rather than "predicate logic",
meaning that it has quantitative truth values and quantitative attention
values (actually, Novamente can express more complex kinds of truth values
and attention values than simple quantities). Similarly, Novamente's
logical inference processes are also quantitative; fuzzy logic rather than
theorem proving. Novamente has complex behaviors that meet synergetically
in the common representation of the big semantic net, which is what raises
Novamente above one-idea AI; the major behaviors that stick out from my
perspective are (a) logical inference, (b) attention spreading, (c) mining
of generalizations, (d) evolutionary programming (mutation, recombination)
on term logic structures.
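To make "quantitative truth values" concrete: each term-logic link carries
numbers rather than a boolean, and inference combines those numbers. The
sketch below is a toy illustration only; the field names and the naive
product-of-strengths deduction rule are my stand-ins, not Novamente's actual
truth-value formulas.

```python
from dataclasses import dataclass

@dataclass
class Link:
    # A term-logic relation with a quantitative truth value (strength)
    # and a quantitative attention value, as described above.
    source: str
    target: str
    strength: float    # truth value in [0, 1]
    attention: float   # attention value in [0, 1]

def deduce(ab: Link, bc: Link) -> Link:
    # Toy fuzzy deduction: A->B plus B->C yields A->C, with the product
    # of the two strengths as a naive stand-in for a real inference rule.
    assert ab.target == bc.source
    return Link(ab.source, bc.target,
                ab.strength * bc.strength,
                min(ab.attention, bc.attention))

cat_mammal = Link("cat", "mammal", 0.99, 0.8)
mammal_animal = Link("mammal", "animal", 0.95, 0.6)
cat_animal = deduce(cat_mammal, mammal_animal)
print(round(cat_animal.strength, 4))  # 0.9405
```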

However, from my perspective, Novamente has very *simple* behaviors for
inference, attention, generalization, and evolutionary programming. For
example, Novamente notices spontaneous regularities by handing off the
problem to a generic data-mining algorithm on a separate server. The
evolutionary programming is classical evolutionary programming. The logical
inference has classical Bayesian semantics. Attention spreads outward like
ripples in a pond. Novamente does not have the complexity that would render
these problems tractable; the processes may intersect in a common
representation but the processes themselves are generic. Regardless of
whether Novamente is Turing-complete, it looks to me like only a tiny corner
of the problem space is tractable for it. "Genericity is not generality" is
the accusation I would level at most of AI. Novamente is not completely
generic and this raises it far above the level of the current crowd of
proposed Real AI designs, but the sum of four generic processes is not
enough to achieve generality.
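"Attention spreads outward like ripples in a pond" corresponds to classic
spreading activation: activation flows from a source node to its neighbors,
decaying at each hop. A minimal sketch, assuming a plain adjacency-list
graph and a made-up decay factor (neither is Novamente's actual mechanism):

```python
def spread_attention(graph, source, decay=0.5, hops=2):
    # graph: dict mapping node -> list of neighbor nodes
    # Activation ripples outward from `source`, multiplied by `decay`
    # at each hop; a node keeps the strongest activation it receives.
    activation = {source: 1.0}
    frontier = {source}
    for _ in range(hops):
        next_frontier = set()
        for node in frontier:
            for neighbor in graph.get(node, ()):
                gained = activation[node] * decay
                if gained > activation.get(neighbor, 0.0):
                    activation[neighbor] = gained
                    next_frontier.add(neighbor)
        frontier = next_frontier
    return activation

graph = {"cat": ["fish", "mammal"], "fish": ["water"]}
print(spread_attention(graph, "cat"))
# {'cat': 1.0, 'fish': 0.5, 'mammal': 0.5, 'water': 0.25}
```

This is exactly the kind of generic process the paragraph above complains
about: it runs on any graph, which is not the same as working on any problem.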

Ben believes that Novamente will support another level of organization above
the current behaviors, so that inference/attention/mining/evolution of the
low level can support complex constructs on the high level. While I
naturally agree that having more than one level of organization is a step
forward, the idea of trying to build a mind on top of low-level behaviors
originally constructed to imitate inference and attention is... well,
Novamente is already the most alien thing I've ever tried to wrap my mind
around; if Novamente's current behaviors can give rise to full cognition at
higher levels of organization, it would make Novamente a mind so absolutely
alien that it would make a human and a Friendly AI look like cousins. But I
don't think Novamente can support general intelligence; I don't think that
the low-level behaviors implemented by Novamente (all of Novamente's current
behaviors are implemented at close to the token level) are ones that can
efficiently support high-level cognitive behaviors. The lower
levels of Novamente were designed with the belief that these lower levels,
in themselves, implemented cognition, not with the intent that these low
levels should support higher levels of organization. For example, Ben has
indicated that while he expects high-level inference on a separate level of
organization to emerge above the current low-level inferential behaviors, he
believes that it would be good to summarize the high-level patterns as
individual Novamente nodes so that the faster and more powerful low-level
inference mechanisms can operate on them directly.
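The "summarize a high-level pattern as an individual node" move Ben
describes can be sketched as follows: a constellation of links gets wrapped
in one new node, so that the node-level machinery can manipulate the whole
pattern directly. Again, a hypothetical illustration, not Novamente's code:

```python
class Net:
    """Toy net supporting pattern summarization as single nodes."""

    def __init__(self):
        self.nodes = set()
        self.links = []          # (source, relation, target) triples

    def add_link(self, s, r, t):
        self.nodes.update((s, t))
        self.links.append((s, r, t))

    def summarize_pattern(self, pattern_links, name):
        # Create one node standing for a whole constellation of links,
        # attaching each constituent link to it as a "part".
        self.nodes.add(name)
        for s, r, t in pattern_links:
            self.add_link(name, "part", f"{s}-{r}-{t}")
        return name

net = Net()
pattern = [("cat", "eats", "fish"), ("fish", "lives-in", "water")]
node = net.summarize_pattern(pattern, "cat-fishing-pattern")
print(node in net.nodes)  # True
```

Note what this buys and what it costs: the low-level inference machinery can
now operate on the summary node, but only because the pattern's internal
structure has been flattened into an opaque token.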

To see a genuine AI capability, you have to strip away the suggestive
English names and look at what behaviors the system supports even if nobody
is interpreting it. When I look at Novamente through that lens, I see a
pattern-recognition system that may be capable of achieving limited goals
within the patterns it can recognize, although the goal system currently
described (and, as I understand, not yet implemented or tested) would permit
Novamente to achieve only a small fraction of the goals it should be capable
of representing. Checking with Ben confirmed that all of the old Webmind
system's successes were in the domain of pattern recognition, so it doesn't
look like my intuitions are off.

By the standards I would apply to real AI, Novamente is architecturally very
simple and is built around a relative handful of generic behaviors; I do not
believe that Novamente as it stands can support Ben's stated goals of
general intelligence, seed AI, or even the existence of substantial
intelligence on higher levels of organization.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


