Re: Human-level CPU power crossover date

From: James Higgins (jameshiggins@earthlink.net)
Date: Wed Apr 11 2001 - 09:10:14 MDT


At 01:49 AM 4/11/2001 -0700, you wrote:
>James Rogers wrote:
> >
> > At 02:37 PM 4/9/2001 -0700, Dani Eder wrote:
> > >My rough estimate of human-level cpu power is
> > >10^11 neurons x 10^4 synapses x 100 Hz = 10^17 bits/s
> > >= 3000 Tflops.
> >
> > This doesn't seem quite right. You are basically describing something that
> > can do 10^17 ops on 10^15 data structures per second. The big problem with
> > this is that it would require an obscenely fast memory bus (say 8-megabits
> > wide at 8-terahertz) to feed the processor core in the best case
> > scenario.
>
>Are you assuming a "processor core"? Is there an assumption of basically
>von Neumann architecture? What if the architecture is massively
>parallel with much of the processing distributed among the memory bits
>and along the interconnections between memory nodes and processing
>nodes? These processors could be relatively simple and quite diverse.
>This would largely obviate the need for a super super fast memory bus.
>
> > Given the tech for this kind of memory bus, the memory required
> > (which would fit snugly in a 64-bit address space) and the processor core
> > would already be solved problems. Massive distributed computing won't work
> > because the effective memory bandwidth is so low that the actual throughput
> > will be orders of magnitude less than suggested by simply aggregating the
> > abilities of individual processors.
> >
>
>Yes, but obviously, massively parallel processing systems with even
>ultra slow memory and processors do work to generate human level
>intelligence. So massively parallel processing systems with several
>orders of magnitude faster processors and memory should be able to work
>orders of magnitude better (minus many engineering difficulties).
>
>- samantha
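
For what it's worth, here is how Dani's arithmetic above works out (a rough
Python sketch; the one-operation-per-synapse-event and 32-bits-per-operation
assumptions are mine, added just to make the units work, not anything he
stated):

# Back-of-envelope check of the numbers quoted above; not a real model.
neurons = 1e11             # rough neuron count
synapses_per_neuron = 1e4  # rough synapses per neuron
rate_hz = 100              # rough firing/update rate

events_per_sec = neurons * synapses_per_neuron * rate_hz   # ~1e17 events/s
bits_per_sec = events_per_sec * 1                          # ~1e17 bits/s
tflops = (bits_per_sec / 32) / 1e12                        # ~3000 Tflops

# If every operation also had to fetch one 32-bit operand from memory,
# the bus would have to carry on the order of:
bytes_per_sec = events_per_sec * 4                         # ~4e17 bytes/s

print("%.0e events/s, ~%.0f Tflops, ~%.0e bytes/s of memory traffic"
      % (events_per_sec, tflops, bytes_per_sec))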

That may be the way it ends up getting done. But the problem is that I
believe the common AI techniques generally require a huge, interconnected
knowledge store. This knowledge store is then used by software that
generally doesn't get terribly complex. The KS pretty much controls what
happens, with the software analyzing & modifying it (for learning) in
various ways. This can get more complex still in that some AI systems
actually store their knowledge as program fragments written by the system
itself.
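
Roughly what I have in mind, as a toy sketch (Python, with names invented
purely for illustration; no particular AI system implied):

# A big shared knowledge store, with comparatively simple control code that
# reads and rewrites it -- including knowledge stored as program fragments
# the system wrote itself.  Everything here is made up for illustration.
class KnowledgeStore:
    def __init__(self):
        self.items = {}                 # key -> arbitrary knowledge item

    def lookup(self, key):
        return self.items.get(key)

    def update(self, key, value):       # "learning" = rewriting the store
        self.items[key] = value

def control_step(ks, percept):
    # The control software stays simple; the contents of the store
    # determine what actually happens.
    knowledge = ks.lookup(percept)
    if callable(knowledge):             # knowledge kept as a program fragment
        result = knowledge(ks, percept)
    else:
        result = knowledge
    ks.update(percept, result)          # modify the store, i.e. learn
    return result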

Such an architecture would be very difficult to map onto a highly parallel
system with very limited RAM access per CPU. Massively parallel
programming is a very different beast from the programming the vast
majority of programmers do, and it requires a different set of tools and a
different way of looking at problems. If we were forced to use this type
of system it would probably have a serious impact on the implementation
timeline, possibly making the availability of the actual hardware
irrelevant for a short while.
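
To put rough numbers on why that mapping is painful (a sketch only; the
latencies below are my guesses, not measurements of any real machine):

# With the knowledge store partitioned across nodes, a random lookup
# usually lands on somebody else's partition, so it pays a network round
# trip instead of a local RAM access.
NODES = 1024
RAM_LATENCY_S = 100e-9    # ~100 ns for a local RAM access (rough guess)
NET_LATENCY_S = 10e-6     # ~10 us for a remote round trip (rough guess)

def expected_lookup_seconds(nodes=NODES):
    # A random key is local with probability ~1/nodes, remote otherwise.
    p_local = 1.0 / nodes
    return p_local * RAM_LATENCY_S + (1.0 - p_local) * NET_LATENCY_S

# Roughly two orders of magnitude slower per lookup than local RAM:
print("expected lookup ~ %.1f microseconds" % (expected_lookup_seconds() * 1e6))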

Of course Eliezer has a much better idea on this, I'm certain, since he
knows about the software they intend to create.


