From: Peter Voss (peter@optimal.org)
Date: Sat May 04 2002 - 13:10:37 MDT
I generally agree with Ben's response to Eli's review.
Specifically, I agree that Real AI (including Seed AI) will require a
simpler, cleaner design rather than the kind of complexity that Eli seems to
call for. Really understanding what general intelligence is, and what its
essentials are, is the key to an effective implementation.
Unlike Novamente, the a2i2 approach focuses on the basic cognitive
mechanisms that lead to high-level thought (including reasoning & formal
logic). We firmly believe that dog-level ('general') intelligence is an
extremely significant step towards human-level AGI. In fact, we take this to
be the real hurdle. This view is based on significant research, and is more
than a vague intuition.
Anyway - back to work....
Peter
-----Original Message-----
...
> My overall reaction is that Novamente is much, much simpler than I had
> been visualizing from Ben's descriptions;
Actually we have been explicitly *striving* for simplicity. Webmind was
more complex with more functionally specialized parts. I look at the
greater simplicity of Novamente as an advantage. Of course, the design is
highly flexible so that we can create greater specialization if it's needed.
This is a philosophical difference, however. You seem to believe that an AI
design has to be very complicated. I think Novamente is still too
complicated, and that in a good design, a heck of a lot of the complexity of
mind should emerge rather than being part of the explicit design. Of
course the design has to be made with the proper sorts of emergence
explicitly in mind, and one of the many shortcomings of the current
Novamente manuscript version is that it doesn't focus on this enough.
....
> However, from my perspective, Novamente has very *simple* behaviors for
> inference, attention, generalization, and evolutionary programming.
We have tried to simplify these basic cognitive processes as much as
possible.
The complexity of cognition is intended to emerge from the self-organizing
interaction of the right set of simple processes on a large set of
information. NOT from complexity of the basic behaviors.
...
> Novamente does not have the complexity that would render these problems
> tractable; the processes may intersect in a common representation but
> the processes themselves are generic.
If by "generic" you mean that Novamente's basic cognitive processes are not
functionally specialized, you are correct.
And I think this is as it should be.
...
> if Novamente's current behaviors can give rise to full cognition at
> higher levels of organization, it would make Novamente a mind so
> absolutely alien that it would make a human and a Friendly AI look
> like cousins.
Yes, I agree, if Novamente becomes a mind it will be a very alien mind. We
are not trying to emulate human intelligence, not at all. Equal and
surpass, but not emulate.
To emulate human intelligence on a digital computer, we need: a) way bigger
computers, b) way more understanding of how the brain works.
The only hope for the short run, in my view, is to seek to build a very
alien intelligence, one that exploits the unique power of digital computers
rather than trying to emulate the brain and its dynamics in any detail.
> The lower levels of Novamente were designed with the belief that these
> lower levels, in themselves, implemented cognition, not with the intent
> that these low levels should support higher levels of organization.
This is completely untrue. You were not there when we designed these
levels, so how on Earth can you make this presumption??
I spent the 8 years before starting to design Webmind writing books and
papers on self-organization and emergence in the mind. (See especially
Chaotic Logic and From Complexity to Creativity.)
OF COURSE, I did not design the lower levels of the system without the
emergence of a higher level of structure and dynamics as a key goal.
PV: Ditto
> By the standards I would apply to real AI, Novamente is architecturally
> very simple and is built around a relative handful of generic behaviors;
> I do not believe that Novamente as it stands can support Ben's stated
> goals of general intelligence, seed AI, or even the existence of
> substantial intelligence on higher levels of organization.
You are right: Novamente is architecturally relatively simple and is built
around a relative handful of generic behaviors.
It is not all THAT simple of course: it will definitely be 100,000-200,000
lines of C++ code when finished, and it involves around 20 different mental
dynamics. But it is a lot simpler than Eliezer would like. And I think its
*relative* simplicity is a good thing.
I suspect that an AI system with 200 more specialized mental dynamics,
rather than 20 generic ones, would be effectively impossible for a team of
humans to program, debug and test. So: Eliezer, I think that IF you're
right about the level of complexity needed (which I doubt), THEN Kurzweil is
also right that the only viable approach to real AI is to emulate human
brain-biology in silico. Because I think that implementing a system 10
times more complex than Novamente via software engineering rather than
brain-emulation is not going to be feasible.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT