RE: [agi] Artificial General Intelligence Research - Help Wanted

From: Ben Goertzel (ben@goertzel.org)
Date: Thu Dec 23 2004 - 07:56:19 MST


A follow-up to my previous email...

In a private email, Moshe Looks added a common complaint that I'd forgotten:

----
Complaint: Learning to be intelligent isn’t possible without building up
abstract cognitions hierarchically from a foundation of content-rich sensory
and action streams. While Novamente can deal with content-rich sensory and
action streams in principle, it’s not really centrally designed for this.
Work on AGI should begin with study of perception and action, and then one
should ask what sort of cognition naturally goes along with one’s working
perception and action modules – and the answer may or may not look like
Novamente.
Answer: Our intuition is that content-rich media aren’t critical; rather,
what’s important for learning to think is interaction with other minds
in a shared perceptual environment in which you’re embodied. However, if
content-rich media do turn out to be critical, Novamente can handle rich
sensorimotor processing perfectly well. While it’s always possible to write
specialized processing code for each type of sensor and actuator, we believe it’s better
to begin with a common framework (such as BOA+PTL) and then specialize it to
deal with the different modalities. This is conceptually analogous to the
way the brain uses the same basic neural mechanisms to deal with the
different human modalities, and also with cognition.
----
-- Ben
> Complaint: The design is too complicated, there are too many
> parts to coordinate, too many things that could go wrong
>
> Answer: Yes it IS complicated, and we wish it were simpler, but
> we haven’t found a simpler design that doesn’t seem patently
> unworkable. Note that the human brain is also mighty complicated
> – this may just be the nature of making general intelligence work
> with limited resources.
>
> Complaint: BOA and PTL are not enough, you need some kind of more
> fundamentally innovative, efficient, or (whatever) learning
> algorithm. This complaint never comes along with any suggestion
> regarding what this "mystery algorithm" might be, though – most
> often it is hypothesized that detailed understanding of the human
> brain will reveal it.
>
> Answer: This is possible, but it seems to us that a hybrid of BOA
> and PTL will be enough. The open question is whether a deeper
> integration of BOA and PTL than we have achieved so far will allow
> BOA learning of reasonably large (500-1000 node) combinator trees.
> If so, then we almost surely don’t need any other learning
> algorithm, though other algorithms may still be helpful.
>
> Complaint: You’re programming in too much stuff: you should be
> making more of a pure self-organizing learning system without so
> many in-built rules and heuristics
>
> Answer: Well, the human brain seems to have a lot of stuff
> programmed in, as well as a robust capability for self-organizing
> learning. Conceptually, we love the idea of a pure
> self-organizing learning system as much as anyone, but it doesn’t
> seem to be feasible given realistic constraints on time, processing
> power, and memory.
>
> Complaint: Programming explicit logical rules is just wrong;
> logic should occur as an emergent phenomenon from more
> fundamental subsymbolic dynamics
>
> Answer: Probabilistic logic is not necessarily symbolic; in the
> Novamente design we use PTL for both subsymbolic and symbolic
> learning, which we believe is a highly elegant approach. The
> differences between subsymbolic probabilistic logic and e.g.
> Hebbian learning are not really very great when you look at them
> mathematically rather than in terms of verbiage. The Novamente
> design is not tied to programming in logical knowledge à la Cyc.
> It’s true that the PTL rules are programmed in (though in
> Novamente 2.0 they will be made adaptable), but this isn’t so
> different from the brain having particular kinds of long-term
> potentiation wired in, is it?
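For readers unfamiliar with the BOA side of the BOA+PTL hybrid discussed above: BOA (the Bayesian Optimization Algorithm of Pelikan et al.) fits a probabilistic model to the most promising candidates in a population and samples new candidates from that model. As a much simpler illustrative relative of BOA (not Novamente code; real BOA learns a Bayesian network capturing dependencies between variables, and the fitness function and parameters here are invented for illustration), a univariate estimation-of-distribution loop on a toy bitstring problem looks like this:

```python
import random

def onemax(bits):
    # toy fitness: count of 1s (a stand-in for scoring a program tree)
    return sum(bits)

def univariate_eda(n_bits=20, pop=50, top=10, gens=40, lr=0.3):
    # maintain an independent probability per bit; BOA would instead
    # learn a Bayesian network over the bits to capture dependencies
    p = [0.5] * n_bits
    for _ in range(gens):
        samples = [[1 if random.random() < p[i] else 0
                    for i in range(n_bits)]
                   for _ in range(pop)]
        samples.sort(key=onemax, reverse=True)
        elite = samples[:top]
        for i in range(n_bits):
            freq = sum(s[i] for s in elite) / top
            p[i] = (1 - lr) * p[i] + lr * freq  # shift model toward elite
    return p

random.seed(0)
model = univariate_eda()
best = [1 if pi > 0.5 else 0 for pi in model]
print(onemax(best))
```

The model-building step is what distinguishes this family from plain genetic algorithms: structure discovered in good candidates is reused when generating new ones.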
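The claim in the last answer, that subsymbolic probabilistic logic and Hebbian learning look similar when viewed mathematically, can be illustrated numerically. The toy script below (not Novamente code; the correlation numbers are invented for illustration) maintains a Hebbian-style running average of the co-activity of two correlated binary units, and shows that it tracks the joint probability P(x=1, y=1), while normalizing the same counts yields the conditional probability P(y=1 | x=1) that a probabilistic link would carry:

```python
import random

random.seed(1)

# two correlated binary "neurons": y usually fires when x fires
def sample():
    x = 1 if random.random() < 0.5 else 0
    if x:
        y = 1 if random.random() < 0.9 else 0
    else:
        y = 1 if random.random() < 0.1 else 0
    return x, y

w = 0.0        # Hebbian weight: running average of the product x*y
nx = nxy = 0   # counts for a frequentist probability estimate
eta = 0.01
N = 20000
for _ in range(N):
    x, y = sample()
    w += eta * (x * y - w)   # Hebbian-style co-activity trace
    nx += x
    nxy += x * y

p_xy = nxy / N               # estimate of P(x=1, y=1)
p_y_given_x = nxy / nx       # P(y=1 | x=1): a probabilistic "link strength"
print(round(w, 2), round(p_xy, 2), round(p_y_given_x, 2))
```

The Hebbian trace and the joint-probability estimate converge to the same quantity; the difference between the two views is largely one of bookkeeping and normalization rather than of underlying mathematics.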


This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:50 MST