From: Ben Goertzel (ben@goertzel.org)
Date: Fri Sep 09 2005 - 04:57:03 MDT
Hi,
> Off-the-shelf distributed systems that you can put together for
> modest cash
> today will spank the bejeezus out of a Thinking Machines CM-5 in terms of
> bandwidth, latency, and of course raw crunch. Moore's law and all that.
> You can replicate the model in your basement for not much money, given a
> good reason to.
Yes, of course this is true. But of course, this is not the most effective
way of using modern computing hardware.
> Obviously, people designing MP-based MIMD infrastructure systems will be
> solving the same kinds of problems you were solving on a
> Connection Machine,
> particularly since the MP fabric is actually more capable than those old
> machines.
As it happens, the things I was doing with the CM-5 were fairly simple,
basically involving solving numerical analysis problems. This was the
case with most CM applications; the killer app was fluid dynamics
simulation.
However, the problems involved with implementing numerical analysis
algorithms OR infrastructure-type problems on an MIMD parallel substrate
are quite different from those involved with implementing AGI on an MIMD
parallel substrate.
Having said that, I should add that it would not be hard at all to modify
the Novamente architecture for an MIMD substrate. This would involve
many changes, but most of them would be for the better. In fact we've had
to bend over backward to make the Novamente math and concepts match up
with the distributed-computing-over-a-small-network-of-von-Neumann-machines
hardware we have available. The math and concepts would mostly match up
more simply with an MIMD parallel infrastructure. An ideal practical
hardware infrastructure, though, would have a mixture, with a large MIMD
component as well as a very powerful traditional serial-computing
component (because there are some parts of the design that don't
parallelize as nicely as others).
Specifically, hypothesis formation in Novamente is done via evolutionary
methods that are extremely easily and effectively parallelized. On the
other hand, for purposes of attention allocation and assignment of credit,
it's more effective to have centralized information-integration and
modeling (though parallelization is possible, there is a moderately high
cost involved).
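To make the first point concrete, here is a rough sketch of why evolutionary hypothesis formation parallelizes so easily (toy code of my own, not Novamente internals; the function names and the bit-counting "fitness" are invented for illustration). The expensive step is scoring the candidates, and each candidate can be scored independently:

```python
# Toy illustration: the fitness-evaluation map over a population is
# embarrassingly parallel, which is what makes evolutionary hypothesis
# formation such a natural fit for an MIMD substrate.
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(candidate):
    # Stand-in for "how well does this hypothesis fit the data":
    # just count the 1-bits.
    return sum(candidate)

def mutate(candidate, rate=0.1):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=20, genome_len=32, generations=30, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    # A thread pool stands in here for a pool of worker processes or
    # machines; the structural point is that this map over the
    # population requires no coordination between candidates.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(generations):
            scores = list(pool.map(fitness, population))
            ranked = [c for _, c in sorted(zip(scores, population),
                                           key=lambda sc: sc[0],
                                           reverse=True)]
            # Elitism: keep the best half, refill with mutated copies.
            parents = ranked[:pop_size // 2]
            population = parents + [mutate(p) for p in parents]
        final_scores = list(pool.map(fitness, population))
    return max(final_scores)
```

The only synchronization point is the selection step between generations; everything else scales out trivially, which is the contrast with the credit-assignment problem discussed below.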
IMO assignment of credit is a good example of an AI problem that no one
has figured out how to parallelize effectively yet. Traditional AI
approaches to assignment of credit such as Q-learning, Holland's
classifiers, or Baum's Hayek are elegantly parallel in nature, yet
highly ineffective. In Novamente we have opted for a more centralized
and probabilistic solution, which I believe will prove much more
effective, though this aspect of the Novamente design remains mainly
unexplored in practice so far (our current work with Novamente requires
only very simple assignment of credit). I don't know of any efficient
parallel algorithm for doing credit assignment in the context of
adaptive learning of complex behaviors in a complex environment. The
probabilistic approach we are using in Novamente contains some aspects
that are not all that easily parallelized, although I'm sure good
approaches to parallelization would present themselves if we had
hardware that demanded them...
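For readers who haven't seen it, here is a minimal tabular Q-learning sketch (again toy code of my own, not Novamente's mechanism) showing the kind of credit assignment referred to above: an agent on a short corridor earns a reward only at the far end, and credit for that reward propagates backward one bootstrapped update at a time.

```python
# Minimal tabular Q-learning on a 5-state corridor: start at state 0,
# reward 1.0 for reaching the rightmost state. Each update touches only
# one (state, action) pair, which is why such schemes are "elegantly
# parallel in nature".
import random

def q_learning_chain(n_states=5, episodes=200, alpha=0.5, gamma=0.9,
                     epsilon=0.2, seed=0):
    random.seed(seed)
    actions = (-1, +1)  # step left, step right
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action choice; ties broken at random.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: (q[(s, act)], random.random()))
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # The credit-assignment step: the estimated value of the
            # *next* state flows back into the estimate for (s, a).
            best_next = max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

After training, the action leading toward the reward dominates at each state. The limitation the text points at is visible even here: credit crawls backward one transition per update, with no global model integrating information across the whole trajectory.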
-- Ben G
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT