From: Ben Goertzel (ben@goertzel.org)
Date: Sun Jun 02 2002 - 07:15:30 MDT
Eugene,
Of course, there is a great deal of truth in all that you say. I'd love to
have cheap, frequently-rewritable Field-Programmable Gate Arrays too ;>
But to me, the factors you mention are just "reasons why progress hasn't
gone *even faster* than it has."
They don't change the fact that, in practical terms, one can do a lot more
with $1000 (or $10,000, or $100,000) of hardware than one could a few years
ago. And I think something similar is true of software. A team of good
coders can get a lot more done in a month now than a decade ago, because of
the Net's resources, because of visual debuggers, etc. etc. etc.
-- Ben G
> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of Eugen Leitl
> Sent: Sunday, June 02, 2002 4:38 AM
> To: sl4@sysopmind.com
> Subject: RE: software progress (RE: Hardware Progress: $319/GF)
>
>
> On Sat, 1 Jun 2002, Ben Goertzel wrote:
>
> > Saying software tech hasn't progressed because it hasn't
> progressed in *your
> > directions of choice* is like saying hardware tech hasn't
> progressed because
> > it hasn't moved substantially toward massively parallel computing...
>
> Actually, hardware is not doing well at all. Software and hardware are
> locked in a tango in which neither partner can make significant advances
> on its own. Progress, if and when it happens, is thus necessarily
> incremental.
>
> All-purpose performance vs. transistor count has been lagging for a while.
> We know that complexity is a dead end as far as high clock rates are
> concerned, yet CPU cores are getting more complex, not less. Asynchronous
> logic doesn't really help here. The fraction of CPU transistors a given
> task actually utilizes gets worse as Moore's law advances. There is no
> hardware message passing in any mainstream CPU architecture. Because the
> current software model is mired in bloat, embedded DRAM has a hard time,
> since on-die grain size is limited for yield reasons. In the absence of
> embedded DRAM, non-burst memory bandwidth growth is stagnating. For a
> given process, the fraction of bad dies rises steeply with die size
> (yield falls roughly exponentially with die area), but since our software
> model doesn't do parallelism, there's pressure for dies as large as
> economically possible. This makes wafer-scale integration, which requires
> semiquantitative yields (~90%), impossible (a numerical sketch follows
> the quoted message). Reconfigurable logic is not there yet. Runtime
> reconfigurable, adaptive logic has so far only been prototyped for
> stochastic architectures. Cellular architectures (the only way to get to
> high clock rates and defect tolerance) are barely on the drawing boards.
>
> Given the above toxic legacy, we're drawing orders of magnitude less
> performance from a fixed amount of silicon real estate (say, a 300 mm
> wafer in a modern process) than would be possible in theory. Given that
> premise, I'm distinctly underwhelmed by the claims, which just chant the
> GHz/MBytes magic mantra.
>
>
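As a minimal sketch of the yield claim in the quoted message: under the
standard Poisson yield model, the probability that a die is defect-free
falls exponentially with its area. The defect density below is an assumed
figure chosen for illustration, not a number from the original post.

  import math

  def poisson_yield(die_area_cm2, defect_density_per_cm2):
      # Poisson yield model: probability that a die of the given area
      # contains zero fatal defects, assuming defects land independently
      # at a fixed average density across the wafer.
      return math.exp(-defect_density_per_cm2 * die_area_cm2)

  d0 = 0.5  # assumed fatal-defect density, per cm^2 (illustrative only)
  for area_cm2 in (0.5, 1.0, 2.0, 4.0):
      print("die area %.1f cm^2 -> yield %5.1f%%"
            % (area_cm2, 100 * poisson_yield(area_cm2, d0)))

With these assumed numbers, yield drops from about 78% for a 0.5 cm^2 die
to about 14% for a 4 cm^2 die, which illustrates why the ~90% yields that
wafer-scale integration requires are out of reach for large monolithic
dies.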