Re: AI hardware was 'Singularity Realism'

From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Mon Mar 08 2004 - 00:54:18 MST


On Mar 7, 2004, at 10:14 PM, Keith Henson wrote:
> By memory measurement, my home computer has the potential to be
> something like 500 times as smart as I am (counting the disk).

No, it might be about "as smart" in some abstract sense, but not 500
times more. Again, you aren't getting a gigabyte of information
encoded for every gigabyte of disk you have. If the moon and stars
aligned, and you wrote the almighty AI program, you would still be off
by an order of magnitude efficiency-wise as a simple consequence of the
kind of hardware you are using. The number of bits I need to encode
something and the number of bits I actually use are two very different
things, even with a great deal of effort.
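
A throwaway illustration of needed-vs-used (a Python one-off, nothing
to do with any particular AI design): a million byte-sized values is
about 1 MB of actual content, but an ordinary in-memory container
spends several times that just on bookkeeping.

    import sys

    # ~1 MB of actual information: a million values that each fit in
    # a byte. The list spends ~8 bytes per element on pointers alone
    # (64-bit build), before counting the int objects themselves.
    values = list(range(256)) * 4096        # 1,048,576 small integers
    payload_bytes = len(values)             # ~1 MB of content
    container_bytes = sys.getsizeof(values) # ~8.4 MB for the pointer array
    print(payload_bytes, container_bytes)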

But more importantly, counting disk, your memory is WAY too slow. This
is a good example of time complexity effectively killing the deal when
space complexity is okay.

> I don't think memory is going to be a problem for AI. I think the
> main problem is going to be trying to trade off memory against limited
> processor power.

What do you need all that processor for?

The real hardware bottleneck is actually memory latency, and it is a
pretty severe bottleneck. Doesn't matter how fast your processor is if
it is starved for data. This has been discussed repeatedly in the
archives of this list. Even the best RAM subsystems found in commodity
systems are borderline inadequate, never mind disk swap.
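
To put round numbers on "starved for data" (order of magnitude only,
typical commodity hardware of today):

    # Round figures, order of magnitude only.
    cpu_cycle   = 0.5e-9   # ~2 GHz core -> 0.5 ns per cycle
    dram_access = 100e-9   # uncached, dependent read from main memory
    disk_seek   = 10e-3    # random disk seek

    print(dram_access / cpu_cycle)  # ~200 cycles stalled per random RAM read
    print(disk_seek / cpu_cycle)    # ~20,000,000 cycles per read that hits disk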

> You might be right. I suspect though that you are misled. I think
> that the data and processors are mixed together in the brain in a way
> that will be difficult to simulate in serial processors--assuming this
> is actually needed for AI.

Serial == Parallel. Six of one, half dozen of the other.
Mathematically equivalent in every way.
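
If you want the equivalence made concrete: a serial machine can
simulate a synchronous parallel one step for step, paying only a
constant-factor slowdown (toy Python sketch, names made up for the
illustration):

    def simulate_parallel(step_fns, states, steps):
        """Run P per-processor step functions in lockstep on one CPU.

        step_fns[i] maps (own state, snapshot of all states) -> new
        state, mimicking a synchronous parallel machine.
        """
        for _ in range(steps):
            snapshot = list(states)   # everyone "reads" the same cycle
            states = [f(s, snapshot) for f, s in zip(step_fns, states)]
        return states

    # 8 "processors", each adding its left neighbour's value each step.
    P = 8
    fns = [lambda s, snap, i=i: s + snap[(i - 1) % P] for i in range(P)]
    print(simulate_parallel(fns, list(range(P)), steps=3))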

The hard part of the simulation is getting the kind of effective
latency that the brain gets when channeling everything through a small
number of memory busses to a small number of processors. The brain may
be slow, but everything is local. The brain "processors" can reference
more objects in a second in aggregate than the fastest computers we
make.
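
Rough numbers, just to show the scale (the biology figures are the
usual order-of-magnitude estimates, nothing precise):

    # All order-of-magnitude estimates, nothing more.
    neurons       = 1e11    # ~10^11 neurons
    synapses_per  = 1e3     # ~10^3-10^4 synapses per neuron (low end)
    avg_rate_hz   = 10      # average firing rate, a few Hz to tens of Hz
    brain_refs    = neurons * synapses_per * avg_rate_hz  # ~10^15 local lookups/s

    dram_latency  = 100e-9  # ~100 ns per dependent, uncached access
    bus_refs      = 1 / dram_latency   # ~10^7 random references/s per memory bus

    print(brain_refs / bus_refs)       # a gap of roughly 10^8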

> I am not sure there is evidence that neural networks have much
> relevance to biological brains.

I don't follow artificial neural networks, as I could probably be
classified as a detractor of them. I do occasionally take a look at
biological neural structure stuff though.

> If you are trying to do pattern matching with coefficients, isn't
> that equivalent?

Pattern matching with coefficients...?

I'm doing something different, and something that has little relation
(theoretical or coincidental) to ANNs. It is more convergent with the
biological structures and models, but this is accidental and
unimportant.

> If you have a couple of hundred million processors, which I think is a
> good number to consider, then each can have a few hundred bytes
> without having to bother with compression.

This is completely missing the point. At that resolution, there is no
compression. But how do you store a megabyte of information
efficiently in such a system? There are many layers, and it isn't like
you would want to do bit-slicing anyway.

> I think it is worth noting that the closest kind of projects to AI
> like the Google search engine *are* massively parallel.

Again, parallel == serial. How "massively parallel" an application can
be depends on the memory latency requirements between nodes. For a lot
of codes (including mine, unfortunately) you cannot do useful
parallelization unless you have an inter-node latency on the order of
<1us. This has been thoroughly discussed on supercomputing lists, and
is also the reason huge ccNUMA supercomputers can *still* school a
commodity computing cluster for many apps.
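
A crude way to see where the <1us figure bites (toy model, made-up
function, round latencies):

    # Each parallel step does `work` seconds of local compute, then
    # stalls for `latency` seconds waiting on data from another node.
    def efficiency(work, latency):
        return work / (work + latency)

    work = 1e-6                      # ~1 us of compute between remote references
    print(efficiency(work, 100e-6))  # gigabit-Ethernet-class round trip: ~1%
    print(efficiency(work, 1e-6))    # ~1 us interconnect: ~50%
    print(efficiency(work, 0.1e-6))  # ccNUMA-class remote memory: ~90%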

This reflects the very unbalanced architecture of modern commodity
computers. Lots of processing power, but extremely poor at working on
very large fine-grained data structures because of weak, slow memory.
True supercomputers don't have this weakness, which is why a Cray at
800MHz can school a Pentium 4 by an order of magnitude on some codes,
but they also cost a small fortune. The brain, for all its weaknesses,
is a well-balanced computing architecture.

j. andrew rogers


