From: Samantha Atkins (firstname.lastname@example.org)
Date: Wed Apr 11 2001 - 02:49:20 MDT
James Rogers wrote:
> At 02:37 PM 4/9/2001 -0700, Dani Eder wrote:
> >My rough estimate of human-level cpu power is
> >10^11 neurons x 10^4 synapses x 100 Hz = 10^17 bits/s
> >= 3000 Tflops.
> This doesn't seem quite right. You are basically describing something that
> can do 10^17 ops on 10^15 data structures per second. The big problem with
> this is that it would require an obscenely fast memory bus (say 8-megabits
> wide at 8-terahertz) to feed the processor core in the best case
Are you assuming a "processor core"? Is there an assumption of a basically
von Neumann architecture? What if the architecture is massively
parallel, with much of the processing distributed among the memory bits
and along the interconnections between memory nodes and processing
nodes? These processors could be relatively simple and quite diverse.
This would largely obviate the need for a super-fast memory bus.
> Given the tech for this kind of memory bus, the memory required
> (which would fit snugly in a 64-bit address space) and the processor core
> would already be solved problems. Massive distributed computing won't work
> because the effective memory bandwidth is so low that the actual throughput
> will be orders of magnitude less than suggested by simply aggregating the
> abilities of individual processors.
Yes, but obviously massively parallel processing systems with even
ultra-slow memory and processors do work to generate human-level
intelligence; the brain itself is the existence proof. So massively
parallel systems with processors and memory several orders of magnitude
faster should be able to work orders of magnitude better (minus many
engineering difficulties).
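For what it's worth, Dani's figures can be checked in a few lines. The
32-bit operand width below is my guess at how 10^17 bits/s was converted
to roughly 3000 Tflops; the thread doesn't say.

```python
# Back-of-envelope check of Dani Eder's estimate. Neuron count, synapse
# count, and firing rate are the figures quoted in the thread; the
# 32-bit word size is an assumption to reconcile bits/s with Tflops.
neurons = 1e11        # ~10^11 neurons in a human brain
synapses = 1e4        # ~10^4 synapses per neuron
rate_hz = 100         # ~100 Hz signaling rate

bits_per_s = neurons * synapses * rate_hz   # 10^17 bits/s
# Treating each operation as ~32 bits wide yields the quoted ~3000 Tflops:
tflops = bits_per_s / 32 / 1e12

print(f"{bits_per_s:.0e} bits/s ~= {tflops:.0f} Tflops")
```

With 32-bit operands this comes out a bit over 3000 Tflops, consistent
with the rounded figure in the quote.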
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT