RE: G. Chaitin on AI

From: Ben Goertzel (ben@goertzel.org)
Date: Sat Mar 02 2002 - 08:01:26 MST


> > "[M]y personal opinion is that AI is not a mathematical problem, it's an
> > engineering problem.. To me a human being is just a very complicated
> piece
> > of engineering that's exquisitely well-suited for surviving in this
> world..
>
> I have always thought this. But surely anyone who agrees with the
> "mind is a
> machine" theory, like I assume all of us on this list are, could
> infer that
> "reverse engineering" the mind inside a modern computer substrate
> is indeed
> an engineering problem.

Well, of course everyone who believes that the mind is a machine believes
that there is an engineering problem involved in building a mind.

But the question is: is the problem *primarily* one of engineering, or
*primarily* one of mathematics, or *primarily* one of neuroscience, or
*primarily* one of cognitive psychology, and so on?

I know a few individuals who believe the mind is a machine, but also that
there is some simple mathematical trick underlying its operation, and that
if we just find this trick, creating an artificial mind will be easy. So
they believe that figuring out the math is the main thing required.

Kurzweil and others believe that the main problem is figuring out exactly
how the human brain works. Once this is known, they reckon, it's just a
matter of emulating the brain on a sufficiently powerful computer, using a
simple neural simulator program (feeding in the exact distribution of
neurons, neurotransmitters, synapses, etc. as inputs).
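
Just to make that position concrete: here is a minimal, purely illustrative
sketch of what such a "simple neural simulator program" might look like --
a leaky integrate-and-fire loop in Java. The constants, the random
connectivity, and the names are my placeholders, not anything from
Kurzweil; in his scenario the weight matrix would be filled in from brain
measurements rather than from a random number generator.

    import java.util.Random;

    public class NeuralSimSketch {
        public static void main(String[] args) {
            int n = 1000;                // stand-in for the brain's ~10^11 neurons
            Random rng = new Random(42);

            // The inputs this approach assumes: measured connectivity and
            // synaptic strengths.  Here they are random placeholders.
            double[][] weight = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (rng.nextDouble() < 0.01)      // sparse connectivity
                        weight[i][j] = rng.nextGaussian() * 0.5;

            double[] potential = new double[n];
            boolean[] fired = new boolean[n];
            fired[0] = true;                          // seed a little activity

            for (int step = 0; step < 100; step++) {  // the simulator itself is trivial
                boolean[] next = new boolean[n];
                for (int i = 0; i < n; i++) {
                    double input = 0.0;
                    for (int j = 0; j < n; j++)
                        if (fired[j]) input += weight[j][i];
                    potential[i] = 0.9 * potential[i] + input; // leaky integration
                    if (potential[i] > 1.0) {                  // threshold -> spike
                        next[i] = true;
                        potential[i] = 0.0;
                    }
                }
                fired = next;
            }
        }
    }

The point of this position is that *all* the difficulty lives in the weight
matrix, and none of it in the simulator.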

Chaitin is saying that mind is mechanical, but also that the task of
constructing a thinking machine requires *primarily* engineering-type
thinking.

Of course, most of us probably hold views that are intermediate between
these extremes. It's the extreme views that get remembered and propagated
because they're so compact to state and simple to recall.

My own view is an intermediate one: I think it takes a mixture of
philosophy, neuroscience, math, engineering and psychology. When I started
out I underestimated the importance of the "engineering" part, but
recognizing that importance doesn't mean denigrating the importance of the
other aspects.

Chaitin's view is a little more like that of Danny Hillis (the parallel
computing pioneer), who has stated that he thinks intelligence is just "a
lot of little things all working together." Minsky's Society of Mind theory
is somewhat in this direction as well. These guys don't place much stock in
emergence, or in the need for different structures and dynamics to be
exquisitely harmonized.

> > "[I]t's very often the case that theoreticians can show that in theory
> > there's no way to solve a problem, but software engineers can find a
> clever
> > algorithm that usually works, or that usually gives you a good
> approximation
> > in a reasonable amount of time.
>
> Maybe I'm thinking on a different plane here, but to me the clever
> algorithms are indeed a mathematical problem.

Some clever algorithms involve significant math, some don't.

In my own AI work, I have not yet had occasion to apply "deep math": no
profound theorems critical to the AI work have been proved about Novamente
or Webmind components.

On the other hand, there have been plenty of applications of known math,
e.g. probability theory, combinatory logic, nonlinear dynamics, and so
forth.

To a real mathematician like Chaitin, working out fairly straightforward
applications of known math is not "doing math."

As a mathematician, I immediately understood this undertone in his
statement, but it may not be obvious to those who have not spent time in the
elitist, peculiar, but fascinating and wonderful tribe of professional
mathematicians ;>

> > "We humans aren't artistic masterpieces of design, we're
> patched together,
> > bit by bit, and retouched every time that there's an emergency and the
> > design has to be changed!
>
> Sounds like the eXtreme Programming methodology! :-) Or more likely the
> timeless classic Waterfall method of software development.

The way we work is sort of like Extreme Programming. We start with a
mathematical design for a system component. Then an XP-type process is used
to get the implementation working. Sometimes there's a detailed design
document prior to programming -- usually, but not always; it depends on the
complexity of the component. If something really fucked comes up during
implementation or detailed design, then a new math approach needs to be
worked out; but this has happened only a few times (and all of them have
been *really* big deals...).

> So I still believe that a general AI could come from a cluster of
> 286's. All
> you need is enough of them working in parallel, not necessarily running
> exactly the same software, or individual OS. Of course, the more powerful
> computers help speed things up a bit, but it doesn't make the
> 286's useless.

Real AI could run on a network of 286's, but it would be very slow.

The problem is that each piece of knowledge in the mind needs to interact
frequently with a high percentage of the other pieces of knowledge in the
mind. So dividing the mind's knowledge up into little chunks of memory, one
chunk stored and processed on each 286, is feasible -- but one is going to
have a HUGE amount of distributed processing going on. To make the latency
manageable, you have to introduce sophisticated adaptive load balancing and
so forth.
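
To see why this is so costly, here's a toy calculation in Java (the 80%
locality figure is invented for illustration): with knowledge items
scattered uniformly over K machines, a fraction (K-1)/K of pairwise
interactions become network hops, and adaptive load balancing earns its
keep by migrating frequently-interacting items onto the same machine.

    public class DistributionCostSketch {
        public static void main(String[] args) {
            for (int k : new int[]{2, 4, 16, 64}) {
                // Uniform scattering: (K-1)/K of interactions are remote.
                double naive = (k - 1) / (double) k;
                // Suppose adaptive balancing co-locates enough frequent
                // partners to make 80% of traffic local (invented figure):
                double balanced = 0.2 * naive;
                System.out.printf(
                    "K=%2d machines: %.0f%% remote naively, ~%.0f%% after balancing%n",
                    k, 100 * naive, 100 * balanced);
            }
        }
    }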

We designed and implemented a system like this at Webmind Inc., as an
infrastructure for running our AI system on. It did work, but it was slow,
and a huge pain in the butt to work with and debug. We never tried it on
286's, but we did run it on plenty of low-end early Pentiums as well as
snazzier machines.

In our current project we are focusing first on getting everything to work
together nicely on one machine, and then we will reimplement (a slicker
version of) the distributed processing framework.

Dealing with distributed processing so early in our project was a
significant mistake: it took a lot of effort, and a distributed AI system is
vastly harder to debug than an isolated one. There are no real testing tools
that handle distributed, nondeterministic, self-organizing systems; the
distributed-systems testing tools that exist at present are *very*
simplistic and limited, mostly focused on application-server setups.

On the other hand, through our experience, we now know exactly how to make
Novamente a distributed system when the time comes; so all the work we put
in on this was certainly not worthless.

Consider two scenarios:

A)
You have X amount of RAM on one machine, with P processors on that one
machine.

B)
You have the same P processors and X amount of RAM spread among K machines,
where P/K is small (commonly 2-4).

Our experience was that, *after copious optimization*, the speed of
configuration B was about 1/5 that of configuration A. This is not because
of network latency; it's because of all the software work you have to do to
make your system distributed-processing-friendly. It's possible this could
be improved to 1/3 or so. We did a lot of work on this in Java, but also
some prototyping in C.

Also, the effective amount of information you can store in configuration B
is only about 2/3 that in configuration A, because achieving reasonable
time-efficiency requires caching the same information redundantly on
multiple machines.
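
Putting the two penalties together as a back-of-envelope in code (the 4 GB
figure is an arbitrary stand-in for X; only the 1/5 and 2/3 ratios come
from our experience):

    public class ScenarioComparison {
        public static void main(String[] args) {
            double ramGb = 4.0;    // hypothetical total RAM X
            double speedA = 1.0;   // normalize configuration A's throughput

            double speedB = speedA / 5.0;           // the ~1/5 speed figure above
            double usableRamB = ramGb * 2.0 / 3.0;  // the ~2/3 storage figure above

            System.out.printf("A: speed %.2f, usable RAM %.1f GB%n", speedA, ramGb);
            System.out.printf("B: speed %.2f, usable RAM %.1f GB%n", speedB, usableRamB);
        }
    }

So for the same nominal hardware, configuration B costs you roughly 5x in
time and 1.5x in memory.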

- Ben G


