From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Wed Mar 27 2002 - 12:25:15 MST
-- Eugen* Leitl leitl
______________________________________________________________
ICBMTO: N48 04'14.8'' E11 36'41.2'' http://www.leitl.org
57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3
---------- Forwarded message ----------
Date: Wed, 27 Mar 2002 13:08:58 -0500 (EST)
From: Robert G. Brown <rgb@phy.duke.edu>
To: Eugene Leitl <Eugene.Leitl@lrz.uni-muenchen.de>
Cc: Beowulf@beowulf.org
Subject: Re: Hardware Progress: $397 (fwd)
On Wed, 27 Mar 2002, Eugene Leitl wrote:
> > If optimistic estimates of the required computer
> > power for human-level AI are correct at 100 TFlop/s,
> > it presently costs $39.7M to buy a human's worth
> > of computers. I have estimated an 'economic
> > crossover' of $3M when computer intelligence
> > becomes cheaper than human intelligence. This is
> > based on a computer being able to put in 5x as
> > many productive hours as an average human, a 5 year
> > payback time on the hardware, and $120K as the total
> > cost per year of a technical professional. We are
> > therefore about 3.5 doublings in performance/$
> > away from economic crossover.
> >
> > Planned improvements in chip manufacturing should get
> > us to that point within 4 years. AMD plans to be
> > producing chips with 65nm feature size by 2006,
> > which should lead to a 20x reduction in cost.
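Before responding, it is worth spelling out the arithmetic behind those
figures (a quick sketch using only the numbers quoted above; it comes out
near the poster's "about 3.5 doublings"):

from math import log2

salary_per_year = 120e3   # total cost/year of a technical professional ($), as quoted
payback_years   = 5       # hardware payback time (years), as quoted
duty_factor     = 5       # computer puts in 5x the productive hours, as quoted

crossover_cost = salary_per_year * payback_years * duty_factor   # -> $3.0M
current_cost   = 39.7e6   # 100 TFlop/s worth of machines today ($), as quoted

doublings = log2(current_cost / crossover_cost)
print("crossover cost: $%.1fM" % (crossover_cost / 1e6))
print("doublings in performance/$ still needed: %.1f" % doublings)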
Well gee, time to start working on the software...:-)
That might take a LOT more than four years, as at this time I'd have to
say "computer intelligence" is still even more of an
oxymoron than "military intelligence", and it is by no means clear that
the key to solving it is "more" of anything -- I'd say instead that the
problem of intelligence (or self-awareness) is still algorithmic, not
processing power per se. We don't know how it works yet; at best we
crudely simulate it -- it may be that we could make our existing
computers "intelligent" if we only knew how. What they lack in
connections they can make up in speed, since human brains function
(admittedly in parallel) in chemical time, which is slooooow.
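To put rough numbers on that speed/connection trade-off, here is a
back-of-envelope sketch (the neuron count, synapse count, and firing rate
are textbook order-of-magnitude guesses, nothing more):

neurons        = 1e11   # ~10^11 neurons in a human brain (assumption)
synapses_per   = 1e4    # ~10^4 synapses per neuron (assumption)
firing_rate_hz = 1e2    # ~100 Hz firing -- "chemical time", millisecond scale

brain_events_per_s = neurons * synapses_per * firing_rate_hz   # ~1e17 events/s

cpu_clock_hz = 2e9      # a ~2 GHz processor (assumption)
per_step_speedup = cpu_clock_hz / firing_rate_hz   # ~2e7x faster per serial step

print("brain: ~%.0e synaptic events/s, each taking ~milliseconds" % brain_events_per_s)
print("cpu: each serial step ~%.0e x faster than one neural firing cycle" % per_step_speedup)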
As far as work load and value are concerned, ANY computer today can do
far more work/second than a human can in its domain of application,
which is why we buy them. Anybody who wants to show up in my office
with some dice and a slide rule to take over my physics computations is
welcome, but be warned that I'll need your services for most of the rest
of the probable lifetime of Mr. Sun to make any real progress.
However, computers are better viewed as amplifiers of human abilities
than as replacements for humans. One human plus a computer may replace
many humans at certain tasks (like generating 10^18 or so steps in a
Markov Process and sampling them appropriately), but I cannot foresee a
time when the human is entirely removed from this equation until
(possibly when) computers are truly self-aware and can dream up useful
or even useless work to do on their own.
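The Markov process example is the kind of thing I have in mind; a toy
Metropolis-style sampler along these lines (purely illustrative -- the
quadratic "energy" is an arbitrary stand-in, not anything we actually run):

import math
import random

def energy(x):
    # Arbitrary toy "energy" for illustration only.
    return x * x

def metropolis(steps, beta=1.0, step_size=0.5):
    # Generate a Markov chain using the Metropolis acceptance rule.
    x = 0.0
    samples = []
    for _ in range(steps):
        trial = x + random.uniform(-step_size, step_size)
        dE = energy(trial) - energy(x)
        # Accept with probability min(1, exp(-beta * dE)).
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            x = trial
        samples.append(x)
    return samples

# A person with dice and a slide rule manages a few steps a minute; even a
# cheap node does millions of these per second, which is why we buy them.
chain = metropolis(1000000)
print("mean of chain: %.3f" % (sum(chain) / len(chain)))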
On another note, I'd disagree somewhat with the extrapolation details of
this particular Moore's Law cost-benefit analysis -- for example, it seems unfair to base
performance on a theoretical measure of instruction latency under ideal
circumstances and then create a system with a LOT of relatively slow
memory that in actual application would cut your ideal floating point
performance by a factor of four or more. Then, it isn't clear that
floating point rates are as relevant to AI as (for example) integer
rates, although lacking anything like real AI this is open to argument.
Finally, it isn't clear why these particular hard disk and memory ratios
were chosen as part of the metric, especially given the fact that hard
disk has (recently, at least) expanded in capacity with a different
exponent than CPU and memory, and that "memory" is available in a wide
variety of speeds, widths, and (hence) cost-benefit optimum points.
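To make the peak-versus-delivered point concrete, here is a crude balance
estimate (the peak and bandwidth figures are illustrative guesses for a
commodity node of this era, not benchmark numbers):

peak_flops     = 2.0e9   # theoretical in-cache peak (assumption: 1 GHz x 2 flops/clock)
mem_bandwidth  = 1.0e9   # sustainable bytes/s from main memory (assumption)
bytes_per_flop = 8       # one double-precision operand streamed per flop (worst case)

memory_bound_rate = mem_bandwidth / bytes_per_flop     # flops/s when memory-bound
fraction_of_peak  = memory_bound_rate / peak_flops

print("memory-bound rate: %.2e flops/s (%.0f%% of theoretical peak)"
      % (memory_bound_rate, 100 * fraction_of_peak))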
For example, one notes that memory and hard disk are the most expensive
components in the "standard system" cited, that the use of 3x MORE
expensive DDR might get you closer to the theoretical FLOPS peak (but
still quite far away from my own direct experience), and that for
aggregate performance to have any meaning whatsoever in the attempt to
build an "intelligent cluster" (which seems to be where this is going)
one would be better off mostly neglecting hard disk capacity altogether
in favor of networking. Presuming (not unreasonably, based on the
nonlocality of neural models of intelligence) that "intelligence" is
likely to be a tightly coupled, synchronous parallel problem, the
network would be far more important than CPU speed, memory size or speed, and
disk size or speed in determining what fraction of the theoretical node
in-cache peaks can actually be applied to the problem as a measure of
aggregate performance. However, networking speed is hard to include in
the metric -- it is pretty strongly bounded, has a LESS favorable
exponent in Moore's law, and there is both latency and bandwidth to
consider even then.
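A toy model shows why the network ends up dominating a tightly coupled,
synchronous calculation (the latency, message size, bandwidth, and
work-per-step below are illustrative assumptions, roughly fast-ethernet class):

compute_per_step = 1e-3    # seconds of pure computation per node per step (assumption)
latency          = 100e-6  # network latency per synchronous exchange (assumption)
message_bytes    = 1e5     # boundary data exchanged per step (assumption)
bandwidth        = 1.25e7  # bytes/s, roughly 100 Mbit/s (assumption)

comm_per_step = latency + message_bytes / bandwidth
efficiency    = compute_per_step / (compute_per_step + comm_per_step)

print("communication per step: %.2f ms" % (comm_per_step * 1e3))
print("fraction of node peak actually delivered: %.0f%%" % (100 * efficiency))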
To conclude, this seems to be a moderately naive metric for systems
comparison and I wouldn't recommend that it be widely adopted:-)
None of which is intended as a flame, BTW. The idea was undoubtedly
presented more for fun than as a serious statement that in four years I
need to watch out or I might be replaced by a computer cluster, and fun
it is...;-)
rgb
--
Robert G. Brown                        http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525  email: rgb@phy.duke.edu