From: James Rogers (email@example.com)
Date: Wed Apr 11 2001 - 13:13:43 MDT
At 01:49 AM 4/11/2001 -0700, Samantha Atkins wrote:
>James Rogers wrote:
> > This doesn't seem quite right. You are basically describing something that
> > can do 10^17 ops on 10^15 data structures per second. The big problem with
> > this is that it would require an obscenely fast memory bus (say 8-megabits
> > wide at 8-terahertz) to feed the processor core in the best case
> > scenario.
>Are you assuming a "processor core"? Is there an assumption of basically
>von Neumann architecture? What if the architecture is massively
>parallel with much of the processing distributed among the memory bits
>and along the interconnections between memory nodes and processing
>nodes? These processors could be relatively simple and quite diverse.
>This would largely obviate the need for a super super fast memory bus.
The assumption I made was that AI requires a shared memory model, such that
partitioning the memory into any number of pieces would necessarily create
a large number of run-time dependencies between the partitions. If the
actual software model can be characterized this way (and to me it seems
that it can be), then the effective memory bandwidth/latency becomes the
aggregate bandwidth/latency of the interconnects in massively parallel
systems. The exact scaling is dependent on the interconnect topology and
the nature of the computational problem (shades of Amdahl's/Gustafson's Law).
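The two scaling laws mentioned above can be sketched numerically. A minimal illustration (the 90%-parallel fraction and the processor counts are arbitrary choices of mine, not figures from this thread):

```python
def amdahl_speedup(p, n):
    """Amdahl: fixed problem size; p = parallel fraction, n = processors."""
    return 1.0 / ((1 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson: problem size grows with n, so the serial fraction shrinks."""
    return (1 - p) + p * n

# With 90% of the work parallelizable:
print(amdahl_speedup(0.9, 1024))     # approaches the 1/(1-p) = 10 ceiling
print(gustafson_speedup(0.9, 1024))  # scaled workload keeps improving
```

Under Amdahl's assumptions no number of processors gets you past the serial fraction's ceiling, which is one way of stating the sub-linear behavior above.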
Many (most?) computations scale poorly on massively parallel systems,
generally showing strongly sub-linear performance gains. A few problems
actually show super-linear performance gains, e.g. 4 processors offering a
5-fold performance improvement over a single processor. The scalability
factor of software on massively parallel systems can be reduced to the
effective memory bandwidth of the system, which is a complex function of
both the software and hardware designs. Super-linear scalability (which
most people's intuition would suggest is impossible) cannot be accounted
for by the hardware or the software individually; it emerges from the
interaction between the two.
>Yes, but obviously, massively parallel processing systems with even
>ultra slow memory and processors do work to generate human level
>intelligence. So massively parallel processing systems with several
>orders of magnitude faster processors and memory should be able to work
>orders of magnitude better (minus many engineering difficulties).
I'm not saying it is impossible, but rather that the interconnect topology
of massively parallel silicon is a pathetic joke compared to the
brain. The brain may be computationally slow but it has extraordinary
width, *way* more width than we can get with our current interconnect
technologies. Problems are bound by speed or bandwidth, but I think it has
been pretty firmly established that computational speed is not the driving
factor in intelligence. Unfortunately, technological gains in bandwidth
haven't been nearly as forthcoming as gains in speed.
This has led me to believe that Moore's law is not a significant factor in
the advent of an AI-driven singularity. The growth of bandwidth is
exponential, but it is doubling at a *much* slower rate than computational
speed.
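The compounding effect of two different doubling rates is easy to check. The doubling times below are assumptions of mine for illustration, not measured figures:

```python
# Illustrative doubling times (assumed, not measured):
SPEED_DOUBLING_MONTHS = 18      # classic Moore's-law pace
BANDWIDTH_DOUBLING_MONTHS = 36  # an assumed slower pace for bandwidth

def growth(doubling_months, years):
    """Growth factor after the given span for a given doubling time."""
    return 2 ** (12 * years / doubling_months)

print(growth(SPEED_DOUBLING_MONTHS, 15))      # ~1024x over 15 years
print(growth(BANDWIDTH_DOUBLING_MONTHS, 15))  # ~32x over the same span
```

Both curves are exponential, but after a decade and a half the gap between them is itself more than an order of magnitude.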
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT