From: Mitchell Porter (mitchtemporarily@hotmail.com)
Date: Sun Jan 23 2005 - 13:23:41 MST
Eliezer says:
>Qualia are the modern version of a very, very old mistake, that of reifying
>a mystery as a substance. Cognitive phlogiston.
And what is the "mystery" that is being "reified"? Oh, just that half
of what we experience in sensation (the "secondary qualities")
DOES NOT EXIST in the world-according-to-physics!
This is what we, the human race, get for repressing the fact that
even space, time, quantity and form (the "primary qualities") come
to us via consciousness. We work only with those because we have
theories for them (arithmetic, geometry, analysis); we extend them
in various ways (extra dimensions, subatomic and cosmological scales,
quantum wavefunctions) until we have "theories of everything";
and then we attempt to live as if *that* is reality, an act which
requires a person to ignore: (i) the secondary qualities in particular;
(ii) the whole nexus of problems relating consciousness, being, and
appearance, in general; (iii) one's own existence and self-awareness,
as something beyond a "narrative center of gravity" (if, with Dennett,
one carries out the exercise to its ultimate conclusion).
Here is a serviceable statement of the problem, the first thing
that Google returns for "secondary qualities":
"Secondary Qualities include qualities of color, odor, smell, and
taste. According to Locke and Descartes, there is nothing in
the world corresponding to our ideas of these qualities. What
we see as "red", for instance, is really just a colorless
arrangement of corpuscles, which, by their particular size,
shape, and motion, have the power to produce in us the
sensation of redness. Berkeley wanted to put secondary
qualities back into real objects, and to thus collapse the
distinction between these qualities and primary qualities."
http://www.sparknotes.com/philosophy/3dialogues/terms/term_18.html
The problem, of course, is that the brain is also "a colorless
arrangement of corpuscles", and if corpuscles outside the
brain can't be red, why are things any different on the inside?
The twentieth century, thanks to behaviorism and information
theory, was able to add a new epicycle to the Lockean
ontology: "secondary qualities" as cognitive events, neural
classifications of stimuli. Photons of a certain wavelength
arrive at the eye; they are detected by a particular subset
of sensory neurons; and the brain has learnt to associate
that stimulus-pattern with a particular word, "red".
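To make the epicycle vivid, here is a toy sketch (mine, purely illustrative; the detector bands and labels are invented for the example) of that account as a bare stimulus classifier. Notice that nothing in it is red; "red" is just a label attached to a firing pattern.

```python
# Toy illustration of the behaviorist/information-theoretic picture:
# a wavelength activates a subset of detectors; the brain has "learnt"
# to associate that activation pattern with a word.

def detector_response(wavelength_nm):
    """Crudely model three cone-like detectors as on/off by band."""
    return (
        wavelength_nm > 560,             # long-wavelength detector
        500 < wavelength_nm <= 560,      # medium-wavelength detector
        wavelength_nm <= 500,            # short-wavelength detector
    )

# The learned association: firing pattern -> word.
LEARNED_LABELS = {
    (True, False, False): "red",
    (False, True, False): "green",
    (False, False, True): "blue",
}

def classify(wavelength_nm):
    """Return the word associated with the stimulus pattern."""
    return LEARNED_LABELS[detector_response(wavelength_nm)]

print(classify(650))  # prints "red" - a word, not a felt quality
```

The classifier does everything the account asks of it, and a "feeling of redness" appears nowhere in it.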
The principal divide in academic philosophy of mind today
seems to be between those who find this sufficient, and
those who think that the existence of a "feeling of redness"
somewhere in the stimulus classifier needs somehow to be
accounted for. Thus functionalism, emergentism, aspect
dualism, and probably many other isms I've never heard of.
Hofstadter's GEB is engaged in a similar game, trying to
explain self-consciousness as emergent from "indirect
self-reference" - rejecting Searle's point that "reference",
self- or otherwise, does not exist in a purely physical
system, any more than a pattern of black and white
markings inherently means anything. (Actually, this is
further than Searle goes; he believes his own argument
when it's applied to computers, but thinks that brains
must be special somehow, because he doesn't question
the "naturalist" dogma that everything fundamentally
reduces to primary-qualities-only physics.) And of course,
naturalist philosophers have been busy producing theories
of meaning ("externalism") which attempt to reduce
semantic relations to causal relations. We *will* get the
peg into that hole, dammit!
So what is the alternative? One can begin by trying to
perceive the extent to which one's own perception of
the world is actually the result of imagination. Find
something that has several colors, and look at a
boundary between two differently colored regions.
You're seeing color A, color B, and somehow you're
also "seeing" that A and B are different. You may (if
you're a computational neuroscientist) have learnt to
think of all this in terms of neural registers and
computations - color A means the neurons are firing
one way, color B firing another way, perception of
difference means a third group of difference-detecting
neurons are also firing.
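That three-group story can itself be written down in a few lines (again my own toy illustration; the firing patterns are arbitrary stand-ins). Seeing-a-difference is modeled as nothing but the third group's activity:

```python
# Toy model of the exercise: group A's pattern for color A, group B's
# for color B, and a "difference-detecting" group that fires only when
# the two patterns disagree.

def fire_for(color):
    """One arbitrary firing pattern per color."""
    patterns = {"A": (1, 0, 1), "B": (0, 1, 1)}
    return patterns[color]

def difference_detector(pattern_a, pattern_b):
    """Fires (True) iff the two input patterns differ anywhere."""
    return any(x != y for x, y in zip(pattern_a, pattern_b))

left, right = fire_for("A"), fire_for("B")
print(difference_detector(left, right))  # True: "difference" perceived
print(difference_detector(left, left))   # False: no difference
```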
But can you also "see" that "seeing color A" and
"neurons firing" are also very different things, and
that it is a very peculiar metaphysical hypothesis to
propose that one is the *very same thing* as the
other? It is about as odd as saying that a pebble you
just picked off the ground is really the number 23.
Like the peculiarities of religious and philosophical
metaphysics, it is a mode of thought that comes to
a person only if they train themselves to develop it.
I could come up with further exercises in self-awareness
for physicists and cognitive scientists, but I will leave
their design as a meta-exercise for the reader. Their
point is to demonstrate that there is a prior way of
experiencing the world, that this other way is the
epistemological starting point, and that any theory which
ends up denying the very existence of that starting
point is on the wrong track.
The immediate challenge is just to develop an adequate
description of consciousness - a phenomenology. The
greater challenge is to develop a theory of reality - an
ontology - which genuinely includes everything in that
phenomenology. A large part of philosophy is nothing
but the systematic attempt to address these two tasks.
It took centuries to produce mathematical physics, so
it may take enormous effort and many mistakes to
produce phenomenology and ontology of equal rigor,
but there is no reason to think it impossible.
I should say something about the "pitfalls" of solipsism,
metaphysical Idealism, dualism, and phenomenalism
a la Copenhagen Interpretation, but I don't have it
in me at the moment. I want to raise just one more
issue, and that is the peril of creating AI - especially
"self-enhancing" and "Friendly" AI - when the nature
of consciousness and physical reality is not yet understood.
It would appear that with AI, we are not re-creating
consciousness; we are instead creating the best illusion
of it that we can, while operating within the physicalist
framework - and then buying into the illusion. Even
without a Singularity, it looks like we will be sharing
the world with entities which are genuinely not conscious
but which *can* pass the Turing Test. They could become
like those flowers which evolve to resemble attractive,
ready-to-mate insects: partners in a symbiosis based on
a human illusion. *With* a Singularity, well, the flowers just
turn into Triffids and don't need us any more.
I don't think the universe has protected us in advance
from this novel doom, any more than it has protected
us very much against the possibility of WMDs; but I think
that at certain points on the path between here and
there, there will be openings to alternatives. Perhaps
when we are trying to train baby AIs into intellectual
self-sufficiency, we'll get responses that intuitively seem
wrong, even if we can't formally explain why they are
wrong (because we have only formalized the physical,
and not the phenomenological), and we'll go back to
the drawing board. Perhaps pseudo-philosophical
interaction with AIs on the cusp of human equivalence
will help us formalize phenomenological ontology. There
remains the possibility that quantum biology will make
us think differently about the natural brain and
therefore about artificial ones, although I maintain that
the revolution will not be complete until we have a new
ontology of matter and consciousness, and not just a
new model of biophysics; otherwise we'll just get new
"quantum naturalisms".
What I *do* think is unlikely is that we will build
super-AIs on the basis of a wrong theory of the mind's
place in nature, which will then magically acquire
the capability to discover our mistake. If we do this,
we will more likely just end up *replacing* real mind with
pseudo-mind, throughout nature.