From: Colin (email@example.com)
Date: Sat Nov 29 2003 - 21:25:43 MST
> -----Original Message-----
> From: firstname.lastname@example.org [mailto:email@example.com] On Behalf
> Of Emil Gilliam
> Sent: Sunday, 30 November 2003 11:43 AM
> To: firstname.lastname@example.org
> Subject: META: Moderation
> > I think it makes considerable sense, but I suspect that if
> I were to
> > give my honest opinion of why speculation was more tolerated than a
> > concrete discussion of computer architecture I'd become persona non
> > grata immediately.
> *Sigh*. Although I am the appointed List Sniper, the sniping I do
> reflects the tastes and whims of Emil Gilliam only, not the
> list owner
> or anyone else.
> Several red flags went up at once in the Jaron Lanier thread, and I
> became sufficiently annoyed to take action. Since several people have
> brought up valid concerns over my judgment, I have reconsidered and
> declare that the discussion of architectures is semi-tolerated again,
> but you are advised to keep the posts interesting.
> - Emil
I'd like to pipe in here with the view that Lanier's rambling should not
be under-rated in content, nor should it be dismissed, from a semiotic
standpoint, as lacking relevance to SL4 thinking.
If you read papers on the progress of AI you find people like Rodney
Brooks saying that they are on the lookout for some youngster to come
along and set them all straight on what we cannot see. AI is in the very
strange position of being claimed a great success and a total failure at
the same time. The last thing they all talk about in summary papers is
how mysterious and hard subjective experience is and how unable we still
are to clearly state its relevance to intelligence. The mystery of the
quale. Oh my, computer science has discovered actual physical reality
and cognitive science!
Next, neuroscience: summary papers outline the mystery of the quale,
walk empirical circles all about it, and decry their own impotence in
uncovering its source. They stare down the microscope, make reams of fMRIs,
and see nothing but fermionic matter doing this amazing thing
called 'the what-it-is-like experience', and all they can recommend is
more and more connectionist ratsnests, when connectionists themselves say
'something is missing'. Grossberg has produced a basically complete
connectionist outline of vision (occipital lobe neural topology). Put it
in silicon...will the silicon have a visual experience? No-one can say.
Philosophers walk the solution landscape and have now institutionalised
the quale as 'the hard problem'. Mysterians and panpsychists and
functionalists and any number of other Xists wander around the
intractability of this mystery. The trail is several thousand years old.
Forests of trees have been slain documenting it. In "Encyclopedia of
Cognitive Science", 2003 (A$2000.00 eeek!) we have Ned Blockage and the
"no-one has even come close to a plausible theory for qualia" cry
enshrined like a company policy. Mysterians like Colin McGinn declare
that we have a fundamental misconception about the nature of matter
(Galen Strawson's comments on him are worth a read too. And Dennett
too, for that matter.)
Scientific method itself is at a well-documented and much-discussed
boundary condition: the thing doing all the observing, in a
classical third-person observer culture several hundred years old, is now
focussed on itself and found wanting.
The academic world and the whole evolution of philosophy of science has
contrived to draw our view of the universe into a particular place and
from that place the solution to the mystery (which is in the end what
Lanier is really talking about) is simply not visible.
Lanier's proposal is, if I understand him correctly, just as wrong as
whatever he criticises. Lots of parallel crap is just a whole bunch
more crap than serial crap. Parallel in 2D, 3D, 4D, crap is still crap.
A model of a thing is not a thing! When a 'thing' is split into Program,
Data and processor substrate, and the only place that any remnant
ontology exists is in the mind of the programmer, that 'thing' is gone
from the universe. I have written reams in support of this type of
argument against the era of the modellers, which started in earnest,
IMO, with Bertrand Russell early last century and must end. I believe
that the principal reason we are in the state we are in is in the same
basic category as the "why does a dog lick its privates...because it
can" joke. That's about as blunt as I can put it. We got our materials
science under control just enough to make piles of algorithm (state
machine) theory testable, and we just kept going because we could and it
did wondrous things to industry. It's just not the path to intelligent
matter: these are the signs I see when I see Laniers and their ilk.
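To make the Program/Data/substrate point concrete, here is a toy sketch (mine, not from the post; every name in it is hypothetical) of a simulated 'neuron'. The machine only shuffles floats; the intended ontology lives entirely in the labels the programmer chose.

```python
# A toy illustration of the Program/Data/substrate split: a "neuron"
# simulated as plain data plus an update rule. The hardware only
# multiplies and adds numbers; the word "neuron" never reaches it.

def step(state, inp, weight):
    """The 'Program': a leaky-integrator update rule. Nothing in this
    arithmetic is a neuron; it is just a recurrence on floats."""
    return state * 0.9 + weight * inp

# The 'Data': a single float. Rename 'membrane_potential' to 'x' and
# the execution trace is bit-for-bit identical -- the intended
# ontology leaves no trace in the substrate.
membrane_potential = 0.0
for spike in [1, 0, 1, 1]:
    membrane_potential = step(membrane_potential, spike, 0.5)

print(round(membrane_potential, 4))
```

Whatever 'neuron-ness' the sketch has exists only in the mind of whoever reads the identifiers, which is exactly the remnant-ontology point above.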
The real value of what Lanier is saying is that he is "poking the bear".
We need more people to gnaw away at conventional 'wizdom' and for it to
become more culturally tolerable to propose new ways of looking at
things. Not necessarily new things, but simply the same things in new
ways.
There's another blockage as well. Einstein said "If you haven't been a
wizkid by 30 you'll never do it, das ist kaput, jah?" or something like
that. Sorry, Albert. You get the idea. The stuff of Brooks' yearning
(above). Lanier describes a brief magic moment in comp sci where the
whole science could reside in one head. The last vestige of the age of
the generalists blinked out sometime in the mid 20th century. My point?
Einstein's magic novel thinking youngster, needed to solve the mystery
hiding amidst the divisions of 400 years of evolution of scientific
disciplines, needs to have more than just a foot in, say, 15 disciplines
i.e., the experience of a veteran. This is the recipe from hell for
hiding a solution.
But wait, there's more! These are the epistemological steak knives. I
believe we have evolved not to find the solution. This is something I
rarely find discussed. Our place in the universe is selected through the
brutish unthinking experimentalism of the natural world and we may be
optimally chosen for masking those aspects of the universe generating
our view of it. If it were a survival advantage for the processes of
subjective experience to be transparently obvious then it would be just
that. If it is a survival advantage to obscure the processes and simply
'be' it, then that is what would happen. I believe we are in the latter
case.
So what's the point of it all? My personal view is that the whole idea
of a 'spike' and everything that this list holds dear is dependent on
the Laniers of the world and this type of thinking and it should be
welcomed and nurtured. Is SL4 part of the problem or part of the
solution? What if the type of computing theory, and its Moore's-Law-driven
hardware, that we have had fed to us for decades is the buggy-whip
solution, with a future that has no relevance to the ultimate outcome of
intelligent matter? (This doesn't mean it's not useful! Just that it's
for a different marketplace.)
I find Lanier pretty unintelligible and tire of him being wheeled out as
a verbally effulgent wunderkind. Nevertheless I'll read what he says
and walk the path and ask for more, because I need those sleepy people to
wake up. Dennett and other functionalist/representationalist folks want
to wish the problem away in spite of legions of contrary punditry. But
it won't be wished away, and we need Lanier-like thinking to break the
impasse. We need to listen well and not simply dismiss, like Dennett did.
Maybe I should post this to edge...if Steve Grand can blather on why
can't I? Hmmm. Invite only. Bugger.
This archive was generated by hypermail 2.1.5 : Sat May 25 2013 - 04:00:46 MDT