From: Stuart, Ian (Ian.Stuart@woolpert.com)
Date: Fri May 20 2005 - 09:29:38 MDT
>Depending on what
>you think 'accurate enough' is, the computing power
>required is estimated to be in the 100 to 100,000
>TFlop
>range. The most powerful computer ever built,
>Blue Gene-L, has just cracked the lower boundary
>of that range. It will be a few more years before
>any AI researchers get their hands on a machine that
>powerful. Thus, so far, inadequate computing
>power has controlled at that end.
>When 100 TFlop desktop PCs have been around for a
>decade and we still don't have an AI, I'll tend to
>think there is some other obstacle at work. Until
>then or until someone provides convincing reasoning
>that it will take vastly less computing power than
>the rough estimates above, I'll tend to think the
>problem has been insufficient computing power.
The 100 to 100,000 TFlops figure assumes simulating a parallel process on a
serial computing substrate, correct? How, then, does the advent of parallel
hardware such as the Cell Processor (2 TFlops each) factor into the
calculation? 50 Cell Processors (50 Playstation 3's) would be the functional
equivalent of the low end of your computation range, assuming the AGI
software can be broken into apulets to run on the parallel cores, and while,
at slightly less than $500 apiece, 50 of them is more than I personally will
be springing for, it seems well within most serious research budgets.
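
To make the arithmetic explicit, here is a quick back-of-envelope script.
The 2 TFlops per Cell and ~$500 per console are the assumed figures from
above, not measured numbers:

    # Back-of-envelope: how many Cells span the estimated range, and at what cost?
    CELL_TFLOPS = 2.0       # assumed peak TFlops per Cell processor
    PS3_PRICE_USD = 500     # assumed price per console

    for target_tflops in (100, 100_000):          # low and high end of the range
        units = target_tflops / CELL_TFLOPS
        cost = units * PS3_PRICE_USD
        print(f"{target_tflops:>7} TFlops -> {units:>7.0f} Cells, ~${cost:,.0f}")

    # Prints roughly:
    #     100 TFlops ->      50 Cells, ~$25,000
    #  100000 TFlops ->   50000 Cells, ~$25,000,000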
For reference, here is a semi-technical write-up on the Cell:
http://www.blachford.info/computer/Cells/Cell0.html; however, it should be
noted that I am not asking specifically about the Cell processor and its
effect on AI research. Rather, I am wondering about the repercussions of
cheaply available parallel, vector processing of the sort previously found
primarily in supercomputers (Cray X-MP or Y-MP).
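
As a rough illustration of the kind of decomposition I have in mind, here is
a minimal, hypothetical sketch. It uses Python's multiprocessing pool as a
stand-in for apulets dispatched to parallel vector cores; the workload and
chunking are made up for the example:

    from multiprocessing import Pool

    def process_chunk(chunk):
        # Stand-in for an "apulet": a small, self-contained unit of work
        # that a vector core could run independently of the others.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Split the problem into independent chunks, one apulet-sized piece each.
        chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]
        with Pool() as pool:              # one worker per available core
            partials = pool.map(process_chunk, chunks)
        print(sum(partials))

The point is only that the speedup is there if, and only if, the AGI workload
actually decomposes into such independent pieces.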