From: Eugen Leitl (firstname.lastname@example.org)
Date: Mon Apr 08 2002 - 04:10:30 MDT
Haven't read the paper myself yet.
On Mon, 8 Apr 2002, Ben Houston wrote:
> Parallelism on the hardware level is currently supported by symmetric
> multiprocessing chip architectures [Hwang98], NOW
I would have mentioned SIMD-in-a-register here, which most modern CPUs
use to speed up multimedia operations, and which already routinely
operates on 64-bit-wide registers, with the trend going up. SMP is a
crappy kind of parallelism, since it is limited to non-memory-bottlenecked
code that doesn't thrash the cache -- an unlikely assumption for AI.
> (network-of-workstations) clustering [Anderson95] and Beowulf clustering
> [Becker95], and message-passing APIs such as PVM [Geist93] and MPI
PVM isn't really a pure message-passing API, since it assumes a global
shared-memory paradigm, which is unphysical and hence doesn't scale. Since
message-passing hardware with native PVM support is unavailable, most PVM
implementations emulate it by using MPI as the transport layer.
> [Gropp94]. However, software-level parallelism is not handled well by
> present-day languages and is therefore likely to present one of the
> greatest challenges.
> I've seen some truly amazing things done in the computational
> pharmacology field dealing with cheap, but massive parallelization.
Most of computational pharmacology is virtual screening/docking, which is
embarrassingly parallel. Basically it's a parametrized run, where each node
addresses a tiny subset of the search space.
> Basically, a lot of shortcuts are available in the parallelization of
> an algorithm once you've solidified it. In other words, making a
> problem solvable in parallel is difficult and costly in the general case,
> but in a specific case it can be quite cheap. The field of computational
> pharmacology has been working with special-purpose multi-teraflop machines
> that cost less than $1,000,000 US for a year or so now.
I had the impression that field was dominated by COTS machines (at least
most sales of COTS machines come from there). I'm not aware of any
current dedicated hardware for pharma screening -- could you give us a few
pointers?
> Even if software parallelism were well-supported, AI developers will
> still need to spend time explicitly thinking on how to parallelize
> cognitive processes - human cognition may be massively parallel on the
> lower levels, but the overall flow of cognition is still serial.
> Cognition, in my opinion, is quite parallel at all levels. There are,
> in my understanding, only a few bottlenecks in the brain that forces
> things to become serial. An obvious example would be the serial nature
> of linguistic output.
There's a big difference between what introspection tells you and what
objective measurements tell you. And the latter show plainly that
introspection is dead wrong here. A few high-order signatures are serial,
that's true, but the underlying processes are massively parallel.
> We know it is possible to evolve a general intelligence that runs on a
> hundred trillion synapses with characteristic limiting speeds of
> approximately 200 spikes per second.
> 200 spikes/sec is probably the median for the brain. Some neurons I've
> studied in my courses have upper limits around 1000 spikes/sec.
Spikes/s is a misleading metric if spike timing carries signal, or if
pattern-based coding across multiple fibers is used. We have evidence
that this is the case.
> Neglecting the sensory and motor systems, I believe that in the CNS 'S'
> would be upwards of at least 5 as a result of the DAG-like arrangements
> of the signal processing pathways -- ignoring backwards, regulatory
> pathways.
It seems to depend on the complexity of the stimuli, and on signalling
distance. A short reflexive pathway (the blink reflex) is different from
high-order processing (adjusting the steering wheel when evading a sudden
obstacle on the road).
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:38 MDT