Re: Edge.org: Jaron Lanier

From: Perry E. Metzger (perry@piermont.com)
Date: Wed Nov 26 2003 - 14:00:54 MST


Eugen Leitl <eugen@leitl.org> writes:
> On Wed, Nov 26, 2003 at 02:15:35PM -0500, Perry E. Metzger wrote:
>> We've done much more work on the parallel architectures path than
>> people seem to remember. I'm a veteran of several parallel computation
>> projects in the late 1980s, like the DADO Machine, the Y Machine, etc.
>
> Computer science is now quite firmly in established-discipline
> (read: rigor mortis) mode. Never mind the founders; early adopters
> have been littering the obits, and each subsequent generation seems
> bent on rediscovering the wheel, only as a polygon. No new language has
> been able to transcend concepts pioneered in Lisp, the second oldest
> language after Fortran. The (already fading) XML hype is just badly
> reinvented SEXPRs in disguise. This is so pathetic, words fail me.

Is it? I'm not sure why it would be otherwise.
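
That said, the XML-as-SEXPRs observation is easy enough to
illustrate. Here's a made-up record (the names are invented purely for
the example) in both notations, using the usual SXML convention for
attributes on the Lisp side:

    <book title="SICP">
      <author>Abelson</author>
      <author>Sussman</author>
    </book>

    (book (@ (title "SICP"))
          (author "Abelson")
          (author "Sussman"))

Same nested-list structure; XML just spells the parentheses as angle
brackets, closes every tag by name, and bolts the attributes on with a
different syntax.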

From the point of view of societal changes and applications, computers
are in their infancy. However, as a scientific and mathematical
discipline, we've fairly extensively explored the space. There
continue to be dark corners, but you can't expect to have a field like
this continue to be equally fertile for genuinely new ideas forever,
can you?

> It is a bit like a Cambrian explosion being over, don't you agree?
> The crazy experimentation mode (and funds for R&D) are over, and
> the discipline has just stagnated. Is it just me, or doesn't CS
> smell a lot like a crypt full of mummies? Perhaps it's time to
> torch the place, and move on to something completely different.

Well, I'm personally doing that insofar as I'm much more interested
in other fields of study right now. It is a big universe.

>> Huge amounts of effort were expended on trying to produce new, parallel
>> paradigms for computation to take advantage of massively parallel
>> hardware. Hundreds of radical new designs were worked on, with all
>> sorts of innovative ideas -- everything from moving to smart memory
>> and dumb computation to data flow architectures to everything else you
>> can imagine.
>
> I remember. I was there, if only as an interested lay observer.
> It is hard to remain optimistic against the backdrop of so many lost hopes.

I don't see why this is a cause either for optimism or pessimism. One
tries experiments, some work and some do not.

>> The result of all of it was that you could produce some interesting
>> architectures for specific computational tasks, but producing
>> something that had general programming utility was damn hard. We just
>> don't know how to do it well.
>
> My point precisely. Our high-level architecture has a huge
> perception/performance deficit as far as parallelism is concerned.
> We can't do it, because we just can't. Can alternative entities (AIs, aliens)
> debug 10^6 concurrent threads in the same manner as we can 2-3?

I suspect that once we've gone transhuman and are no longer
constrained by our existing brain capacities, we'll be able to deal
with more complexity -- but remember, in certain areas like threading
some forms of complexity grow exponentially (the number of possible
interleavings explodes as you add threads), and the utility of the
mechanisms is not as high as one might naively expect. It was only
about eight years ago that I understood that event-driven systems are
much, much cleaner, more efficient, and nicer to work with than
threaded systems....
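
To make the event-driven point concrete, here is the rough shape of
what I mean: a toy single-threaded echo server that multiplexes all of
its connections through one select() loop. This is a sketch written
for illustration only -- the port number is arbitrary and the error
handling has been stripped out:

    #include <string.h>
    #include <unistd.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        /* Listen on an arbitrary example port; error checks omitted. */
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(7000);
        bind(listener, (struct sockaddr *)&addr, sizeof(addr));
        listen(listener, 16);

        fd_set master;
        FD_ZERO(&master);
        FD_SET(listener, &master);
        int maxfd = listener;

        for (;;) {
            /* One blocking wait covers every connection at once. */
            fd_set readable = master;
            select(maxfd + 1, &readable, NULL, NULL, NULL);

            for (int fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &readable))
                    continue;
                if (fd == listener) {
                    /* New connection: just add it to the watched set. */
                    int c = accept(listener, NULL, NULL);
                    FD_SET(c, &master);
                    if (c > maxfd)
                        maxfd = c;
                } else {
                    /* Data (or EOF) on an existing connection. */
                    char buf[512];
                    ssize_t n = read(fd, buf, sizeof(buf));
                    if (n <= 0) {
                        close(fd);
                        FD_CLR(fd, &master);
                    } else {
                        write(fd, buf, n);   /* echo it back */
                    }
                }
            }
        }
    }

The threaded version of the same thing pays for a stack, a context
switch, and a share of your locking discipline for every single
connection; here, all of the state lives in one loop you can read top
to bottom.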

> Is this at all possible, or is all massive parallelism unattainable
> except by algorithms driven stochastically from the roots up? I wish we knew.

We'll find out eventually. ;)

>> I don't think the problem was hardware. I think the problem is that
>
> Initial bits were dumb. We were stuck with the notion "smart central
> processing unit" vs. "dumb storage, a pile of bits" even after we
> got bits represented in the same structures as logic gates.
> It was a habitual bias we didn't even realize we had, because we're
> so used to the human processor sequentially hacking away at the
> dumb universe metaphor.

But people experimented a great deal with smart memories during the
parallel-processing research of the '80s, and they didn't get very far
with it. Various kinds of CAMs are in fact still in use -- most
routers are full of content-addressable memories -- but for the most
part the experiments ended in failures of various sorts.
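
If it helps to see what "content addressable" means here, this is a
toy software stand-in for the lookup a router's TCAM does in hardware:
you hand it a key (a destination address) rather than an address, and
the matching entry comes back. The table entries and port numbers are
made up for the example, and the serial loop only imitates what the
hardware does in parallel:

    #include <stdint.h>
    #include <stdio.h>

    struct tcam_entry {
        uint32_t value;  /* prefix bits */
        uint32_t mask;   /* which bits participate in the match */
        int      port;   /* where a hit forwards the packet */
    };

    /* In a real TCAM every entry is compared against the key at once
     * and the first (highest-priority) match wins; this loop is only a
     * serial imitation of those semantics. */
    static int lookup(const struct tcam_entry *t, int n, uint32_t dst)
    {
        for (int i = 0; i < n; i++)
            if ((dst & t[i].mask) == (t[i].value & t[i].mask))
                return t[i].port;
        return -1;  /* no route */
    }

    int main(void)
    {
        /* Made-up routing table, longest prefix first. */
        struct tcam_entry table[] = {
            { 0x0A000100, 0xFFFFFF00, 1 },  /* 10.0.1.0/24 -> port 1 */
            { 0x0A000000, 0xFF000000, 2 },  /* 10.0.0.0/8  -> port 2 */
            { 0x00000000, 0x00000000, 0 },  /* default     -> port 0 */
        };
        printf("10.0.1.7  -> port %d\n", lookup(table, 3, 0x0A000107));
        printf("10.9.9.9  -> port %d\n", lookup(table, 3, 0x0A090909));
        printf("192.0.2.1 -> port %d\n", lookup(table, 3, 0xC0000201));
        return 0;
    }

The whole point of the real hardware is that every comparison happens
at once -- a narrow, fixed-function sort of parallelism, which is
roughly the kind of thing that did survive.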

>> techniques we don't understand. By contrast, the von Neumann style lent
>> itself very easily to real-world work.
>
> Because we seem to see the world as a sequence of events. We can't
> jump outside of our skins.

Perhaps part of it is human psychology; perhaps part of it is that it
is legitimately hard to design things that don't follow that
paradigm.

Either way, though, I'm not going to call what we've done a
failure. Look around you. The very machine you're communicating with
-- and the vast global network it is attached to -- can hardly be
called a failure.

-- 
Perry E. Metzger		perry@piermont.com

