From: Dale Johnstone (DaleJohnstone@email.com)
Date: Mon Mar 12 2001 - 21:22:33 MST
>Brian: These chips would be over 500 times faster than the best
>Athlons or Pentium 4s we have now. And in only 5 years. WOW
>
>Eliezer: If this is true, a medium-sized research project should
>definitely be able to buy enough power for "true AI" by 2005-2007. Of
>course, that's hardware rather than software - but still.
>
>
>URL: http://dailynews.yahoo.com/h/nf/20010312/tc/8100_1.html
Maybe even sooner than that:
"Production of the new chip is expected to start in 2004."
http://news.bbc.co.uk/hi/english/business/newsid_1216000/1216551.stm
Assuming Microsoft's XBox is a success, Sony will push hard to regain
market share. They won't want this timescale to slip.
New consoles are never released without new software available.
Selected developers will have access ~6-12 months before consumers
(although not always in the same pretty box).
However, my guess is that this chip will not be a general purpose CPU,
but a graphics processor. (Modern GPUs typically have more gates than
their contemporary CPUs.)
"The chip will also be capable of massive parallel processing -
dividing up complex or time-consuming processing tasks among many
chips..."
This sounds like a classic divide-and-conquer strategy used to break
the rendering task into smaller pieces. In this approach the screen is
subdivided into smaller areas for parallel rendering. It also allows an
easy upgrade path by simply bolting on more processors (a la 3Dfx SLI).
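A minimal sketch of that subdivision idea, in Java (class and method names are my own invention, not anything from the article): split the frame into horizontal bands and hand one band to each processor.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split a frame into horizontal bands for
// parallel rendering, one band per processor.
public class TileSplit {
    // Returns the starting scanline of each band; band i covers
    // rows.get(i) .. rows.get(i+1)-1 (the last band runs to `height`).
    public static List<Integer> bands(int height, int processors) {
        List<Integer> rows = new ArrayList<Integer>();
        for (int i = 0; i < processors; i++) {
            rows.add(i * height / processors);
        }
        return rows;
    }
}
```

Bolting on more processors then just means passing a larger `processors` count; no band depends on any other, which is what makes the scheme scale.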
I'm not sure how useful a graphics processor would be for general AI
work, but I suspect it's possible to implement some kind of neural
style processing with judicious use of texture compositing hardware in
combination with the z-buffer (or stencil buffer). OpenGL shadow
effects use similar ideas. Another possibility is coopting the
Transform & Lighting stage for general purpose matrix processing. You
can do this today on hardware T&L cards - the API already exists in
DirectX. I haven't heard of anyone exploiting this yet, mind you;
hardly anybody makes use of the powerful SIMD instructions they
already have in their processors anyway. Perhaps Intel should have put
Visual Basic instructions on chip instead. <smirk>
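To make the T&L idea concrete: the transform stage is essentially a 4x4 matrix-vector product applied to every vertex, and that same operation is the core of a small neural-net layer (weights times activations). A plain-Java sketch of the math being co-opted (names are mine, not any real T&L API):

```java
// Hypothetical sketch: the 4x4 transform the T&L stage applies to each
// vertex is just a matrix-vector product, so feeding it a weight matrix
// and an activation vector computes a layer's weighted sums in hardware.
public class Transform {
    public static float[] apply(float[][] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            float sum = 0f;
            for (int col = 0; col < 4; col++) {
                sum += m[row][col] * v[col];  // same math for vertices or weights
            }
            out[row] = sum;
        }
        return out;
    }
}
```

On a hardware T&L card you would upload `m` as the transform matrix and stream the vectors through as vertices; the sketch only shows what the silicon would be computing.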
Another thing I've seen recently - powerful accelerator cards used for
video post production work (think hardware accelerated Photoshop
filters on wheels) - very interesting.
Ben: Have you thought about compressing your data before paging to/from
disk? And why use Java anyway? Nice language to work with, but it blows
goats in the memory use department... :)
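Since it's a Java system anyway, the compression step costs almost nothing to try: the standard library's `GZIPOutputStream` will do it. A sketch of compressing a page before it goes to disk (the class name is hypothetical, not from Ben's code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch: gzip a page of data before writing it out,
// trading a little CPU for less disk traffic when paging.
public class PageCompress {
    public static byte[] compress(byte[] page) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(buf);
        gz.write(page);
        gz.close();  // flushes and writes the gzip trailer
        return buf.toByteArray();
    }
}
```

For the sparse, repetitive structures typical of paged-out working sets, the on-disk size can shrink dramatically, which matters more than the CPU cost when the disk is the bottleneck.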
Regards,
Dale Johnstone.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT