From: Keith Henson (hkhenson@rogers.com)
Date: Mon Mar 08 2004 - 19:10:07 MST
At 11:54 PM 07/03/04 -0800, Andrew wrote:
>On Mar 7, 2004, at 10:14 PM, Keith Henson wrote:
snip
>>I don't think memory is going to be a problem for AI. I think the main
>>problem is going to be trying to trade off memory against limited
>>processor power.
>
>What do you need all that processor for?
>
>The real hardware bottleneck is actually memory latency, and it is a
>pretty severe bottleneck. Doesn't matter how fast your processor is if it
>is starved for data. This has been discussed repeatedly in the archives
>of this list. Even the best RAM subsystems found in commodity systems are
>verging on inadequate, never mind using disk swap.
I am *assuming* you are going to have gobs of processors. But processors
are relatively expensive compared to memory.
>>You might be right. I suspect though that you are misled. I think that
>>the data and processors are mixed together in the brain in a way that
>>will be difficult to simulate in serial processors--assuming this is
>>actually needed for AI.
>
>Serial == Parallel. Six of one, half dozen of the other.
>Mathematically equivalent in every way.
I don't think that a billion-GHz processor is *ever* in the cards, whereas a
billion one-GHz processors are certainly possible. (One GHz means a
one-nanosecond clock, and in a nanosecond light goes about a foot. So a
billion-GHz (10^18 Hz) processor would have to have its elements spaced 1/3 of
a nanometer apart even if they talked at light speed. The point being that at
some clock rate, you just *can't* go faster.)
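(To put numbers on that, here is a quick back-of-the-envelope check -- my own
sketch in Python, not something from the original argument:)

    # How far can a signal travel in one clock period at light speed?
    C = 3.0e8                      # speed of light, m/s

    for clock_hz in (1e9, 1e18):   # 1 GHz vs. a "billion GHz" (10^18 Hz)
        period_s = 1.0 / clock_hz
        reach_m = C * period_s     # distance light covers in one cycle
        print(f"{clock_hz:.0e} Hz -> {reach_m:.1e} m per cycle")

    # 1e+09 Hz -> 3.0e-01 m per cycle   (about a foot)
    # 1e+18 Hz -> 3.0e-10 m per cycle   (about 1/3 of a nanometer)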
>The hard part of the simulation is getting the kind of effective latency
>that the brain gets when channeling everything through a small number of
>memory busses to a small number of processors. The brain may be slow, but
>everything is local. The brain "processors" can reference more objects in
>a second in aggregate than the fastest computers we make.
Exactly.
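(Rough numbers -- my own assumed figures, not Andrew's -- make that aggregate
comparison concrete:)

    # Order-of-magnitude estimate: aggregate "references per second" in the
    # brain vs. random DRAM accesses on one commodity machine. The synapse
    # count, event rate, and DRAM latency below are assumed round figures.
    SYNAPSES       = 1e14     # roughly 10^14 synapses in a human brain
    EVENTS_PER_SYN = 1.0      # assume ~1 signalling event per synapse per second
    brain_refs = SYNAPSES * EVENTS_PER_SYN     # ~1e14 local lookups per second

    DRAM_LATENCY = 100e-9     # ~100 ns per random DRAM access
    CHANNELS     = 2          # dual-channel commodity memory
    pc_refs = CHANNELS / DRAM_LATENCY          # ~2e7 random lookups per second

    print(f"brain ~{brain_refs:.0e}/s vs. PC ~{pc_refs:.0e}/s "
          f"(ratio ~{brain_refs / pc_refs:.0e})")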
snip
>>If you have a couple of hundred million processors, which I think is a
>>good number to consider, then each can have a few hundred bytes without
>>having to bother with compression.
>
>This is completely missing the point. At that resolution, there is no
>compression. But how do you store a megabyte of information efficiently
>in such a system?
Bit at a time. :-)
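(Less flippantly: with a few hundred bytes per processor, a megabyte is just
sharded across a few thousand of them. A toy sketch, entirely my own
illustration:)

    # Spread a "megabyte" across nodes that each hold only 256 bytes.
    BYTES_PER_NODE = 256
    data = bytes(1_000_000)

    # node i holds data[i*256 : (i+1)*256] -- about 4000 nodes in all
    shards = [data[i:i + BYTES_PER_NODE]
              for i in range(0, len(data), BYTES_PER_NODE)]

    def read(offset, length):
        # reassemble a byte range by asking only the nodes that hold it
        out = bytearray()
        for pos in range(offset, offset + length):
            node, local = divmod(pos, BYTES_PER_NODE)
            out.append(shards[node][local])
        return bytes(out)

    assert read(12345, 100) == data[12345:12445]
    print(len(shards), "nodes holding", BYTES_PER_NODE, "bytes each")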
>There are many layers, and it isn't like you would want to do bit-slicing
>anyway.
>
>>I think it is worth noting that the closest kind of projects to AI like
>>the Google search engine *are* massively parallel.
>
>Again, parallel == serial. How "massively parallel" an application can be
>depends on the memory latency requirements between nodes.
It also depends a *lot* on the kind of problem you are trying to solve. If
you are pattern matching, you can *broadcast* what you are looking for.
About a decade ago I was involved with a massive office automation
project. The most primitive objects were scanned pages. The server burned
the image data onto CD-ROMs and stored them in jukeboxes. It kept the most
recent 30 Gbytes on its own disk, and it also distributed each image to a
push-off-the-end cache file on an assigned workstation. The jukebox was a
reliable source of information, but the performance really sucked (about 20
seconds to load a page). I wrote a little Perl script that ran on each
workstation, listening on a common port. When a workstation requested a
page image, the server looked for it in its local disk store. If it was
not there, it pinged all the workstations and got the page image from the
first one to respond. If no workstation responded, it loaded the file from
a jukebox. Nineteen times out of twenty the needed file was still on a
workstation--which made a huge performance improvement.
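(The original Perl is long gone; here is a rough Python sketch of that lookup
order -- local disk, then a broadcast "who has it?" to the workstations, then
the jukebox as the slow last resort. The dictionaries below are stand-ins for
the real disk stores and the network, not the actual system:)

    # Sketch only: dicts stand in for the server's disk cache, each
    # workstation's push-off-the-end file, and the CD-ROM jukebox.
    server_cache = {}                            # most recent ~30 GB on the server
    workstation_caches = [{} for _ in range(40)] # one per workstation
    jukebox = {}                                 # has everything, ~20 s per load

    def fetch_page(page_id):
        # 1. check the server's own disk store
        if page_id in server_cache:
            return server_cache[page_id]
        # 2. "ping" every workstation on the common port; take the first hit
        #    (about nineteen times out of twenty this is where we stop)
        for cache in workstation_caches:
            if page_id in cache:
                return cache[page_id]
        # 3. last resort: pull the image off a jukebox CD and keep it locally
        page = jukebox[page_id]
        server_cache[page_id] = page
        return page

    # toy usage
    jukebox["page-0001"] = b"...scanned image bytes..."
    workstation_caches[7]["page-0001"] = jukebox["page-0001"]
    print(len(fetch_page("page-0001")))          # served by a workstation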
>For a lot of codes (including mine, unfortunately) you cannot do useful
>parallelization unless you have an inter-node latency on the order of <1us.
Biological brains are at least three orders of magnitude slower and still
work. How are they doing it?
>This has been thoroughly discussed on supercomputing lists, and is also
>the reason huge ccNUMA supercomputers can *still* school a commodity
>computing cluster for many apps.
>
>This reflects the very unbalanced architecture of modern commodity
>computers. Lots of processing power, but extremely poor at working on
>very large fine-grained data structures because of weak, slow memory.
>True supercomputers don't have this weakness, which is why a Cray at
>800 MHz can school a Pentium 4 by an order of magnitude on some codes, but they
>also cost a small fortune. The brain, for all its weaknesses, is a
>well-balanced computing architecture.
Right. My guess is that the brain is *much* richer in processors than our
machines are, and what little memory it does have sits right next to those
processors. You might find it
interesting to muse on a world in which processors cost about the same as a
few hundred bytes of memory. Would that cause you to rethink the design?
Keith Henson
>j. andrew rogers