From: Mitchell Porter (mitchtemporarily@hotmail.com)
Date: Mon Jul 25 2005 - 20:55:55 MDT
Ben Goertzel wrote:
>I tend to think that if one builds a software program with the right
>cognitive structures and dynamics attached to it, the qualia will come
>along "for free".
Well, if one believes in both qualia and matter, there are at least three
possibilities regarding their relationship:
1) Some form of identity theory: qualia actually are material entities of
some sort.
2) Property dualism: qualia are material entities, but they are an aspect of
matter not yet encompassed by physics.
3) Substance dualism: qualia are not material entities, they belong to some
other ontological category.
So that you can see what I'm driving at, I'm going to focus on visual qualia
- visual sensations, if one prefers.
visual sensations, if one prefers. First of all, we need a working
description, so I will adopt a very crude one: the visual field consists of
adjacent patches of color. I'm also adopting a substance-property ontology,
in which color is a property of patches. So if I am satisfied that there are
such things as colors, and patches of color, next I want to know, what are
they? What is their relationship to my hypothesized physical ontology? Again
for the purposes of argument, I will make the crude assumption that my
physical ontology consists of colorless particles moving in space according
to some dynamical laws.
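Purely for concreteness, the working description can be put in schematic
form. This is a crude Python sketch of my own; the names and structure are
invented for illustration, and no theory is encoded in them:

    from dataclasses import dataclass

    # The working description: a visual field as a collection of patches
    # (the substances), each bearing a color (the property). Everything
    # here is illustrative.
    @dataclass
    class ColorPatch:
        region: tuple   # where the patch sits within the visual field
        color: str      # the property the patch bears

    visual_field = [
        ColorPatch(region=(0, 0), color="blue"),
        ColorPatch(region=(0, 1), color="green"),
    ]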
Now (1) I have already dismissed as impossible; there simply is no such
thing as a patch of color in the physical ontology just defined. However,
there are substances ("things") with properties - the individual particles
have their dynamical properties, and sets of particles have additional
properties such as number of elements, average position, and so forth. So I
might adopt (2), and propose that a color-patch is actually a set of
particles in physical space, and that color is a new property it possesses,
and perhaps that there are new laws relating the variation of the
geometrico-algebraic properties of the sets of particles to the variation of
this additional 'color' property. Alternatively, I might feel that the space
inhabited by the color patches - the visual field - is obviously a different
space to the physical space in which the fundamental particles move, and so
I might go for (3) - a color-patch is a different sort of *thing*, whose
properties include color. I am again free to postulate laws of correlation
between particles and color-patches.
It seems to me that most people in philosophy now want to believe some
variation of (1). What was a law of correlation between two things, in
hypotheses (2) and (3), becomes a statement of identity in the context of
(1): a patch of color *is* a certain sort of set of particles in a certain
sort of state. If you are a neuroscientist, you will propose something like:
an assembly of neurons (i.e. a certain set of particles) spiking according
to a code (i.e. in a certain state, or sequence of states). If you are a
computer scientist, you may wish to say that the set of particles need only
be an "information processor", and that the relevant state need only have
certain causal relations with the states of various neighboring systems (the
environment, other information processors).
But as I keep saying, assertions of outright identity between qualia and
sets of particles are simply not viable. They at least ought to be
recognized as highly exotic metaphysical propositions, on a par with
asserting that this rock over here is 'actually' the number 2. There is,
however, another problem that continues to exist, even if identity theories
are modified so as to become dualist theories, and that is the vagueness
(underdetermination, to be more precise) of the physical description. Recall
that a law of phenomenal-physical correlation says: qualia like this occur
always in conjunction with physical states of affairs like that. But as
descriptions of physical states of affairs, neither "neuron" nor
"causal-functional state" is exact, and fundamental laws need to be exact.
(I take it as axiomatic that vagueness is only ever a property of
descriptions, never of the things described. A particle cannot have a
position without having a particular position, for example.) A stock example
of an inexact or vague predicate is baldness. You're not bald when you have
a full head of hair; you are bald if you have none; but what if you have one
hair, ten hairs, a thousand hairs; where is the dividing line between bald
and not-bald? There is no reason to think that there is any way to answer
that question without arbitrary stipulation that, say, 1000 hairs is the
most you can have and still be bald.
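In code, the stipulation becomes explicit. A toy Python sketch of my own;
the cutoff value is precisely the arbitrary part:

    # A vague predicate made "exact" only by stipulation. Nothing about
    # heads or hairs singles out 1000 rather than 999 or 1001; the
    # cutoff is pure fiat.
    BALDNESS_CUTOFF = 1000  # hairs

    def is_bald(hair_count: int) -> bool:
        return hair_count <= BALDNESS_CUTOFF

    print(is_bald(1000))  # True
    print(is_bald(1001))  # False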
But the same consideration applies to the division of microphysical
configuration space into "neuron" and "not-neuron", or even to the division
of possible states of a transistor into "one" and "zero". Fundamentally, we
are able to attribute semantic content to transistor states because of their
predictable behavior: when we act so as to put a transistor into a "one"
state, it usually behaves subsequently as a "one" is supposed to behave. But
if we
consider all possible distributions of electrons throughout a transistor,
there will clearly be marginal cases. These do not matter functionally if
they never, or hardly ever, get physically realized in our computers; but
they matter ontologically, if we are proposing a new universal law of
psychophysical correlation which purports to describe the conditions under
which qualia are realized. You have to draw an exact boundary in
configuration space and say, these are the ones, these are the zeroes; these
are the blue qualia, these are the green qualia; this is the thought of an
apple, this is the thought of a red apple. And since neurons, just like
transistors, are actually made of trillions of elementary particles, there
will be functionally marginal states, and how you divide configuration space
will have an element of arbitrariness to it.
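A minimal sketch makes the structure of the problem plain. It assumes, quite
unrealistically, that a transistor's state can be summarized by a single
readout voltage; the threshold values below are stipulations of mine, which
is exactly the point:

    # Toy model: a transistor's state collapsed to one readout voltage
    # (a drastic idealization of its real microphysical state). The
    # thresholds are stipulated, and the marginal region between them
    # must be assigned by fiat or left undefined.
    V_ZERO_MAX = 0.8  # stipulated: at or below this counts as "zero"
    V_ONE_MIN = 2.0   # stipulated: at or above this counts as "one"

    def read_bit(voltage: float) -> str:
        if voltage <= V_ZERO_MAX:
            return "zero"
        if voltage >= V_ONE_MIN:
            return "one"
        # A working computer hardly ever realizes these states, but a
        # universal law of psychophysical correlation would still have
        # to say something exact about them.
        return "marginal"

    for v in (0.1, 0.8, 1.4, 2.0, 3.3):
        print(v, read_bit(v))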
Now, unlike scenario (1), a law of psychophysical correlation of this sort
is at least not logically absurd, but this strong element of inescapable
arbitrariness, in the details of the coarse-graining of physical
configuration space, makes it very unattractive, to me at least. I think it
would make more sense to suppose that qualia are to be correlated directly
with individual exact microphysical states of something, since the laws of
correlation are likely to be much cleaner (there will be less scope and less
need for arbitrary divisions of configuration space). Thus one might look
for low-quantum-number mesoscopic quantum states in the brain, for example.
Returning to my original assertion that an AI designed according to a
functionalist philosophy is unlikely to solve these problems for us, perhaps
the key consideration is the nature of self-knowledge. I take it as given,
not just that there are qualia, but that there is awareness of qualia, and
an associated capacity to reason about their nature, and that this is what
makes phenomenological reflection possible in human beings. The
functionalist analysis of this situation involves mapping out the formal
structure of the causal relationships involved, and a functionalist
"implementation" of phenomenological reflection would involve instantiation
of that causal structure in a computational medium. However, if the
functionalist theory of *qualia* is wrong, then phenomenological reflection
would not *actually* be occurring in that computational medium, any more
than suffering occurs somewhere in the physical interior of a book, no
matter how tragic the tales it tells. And if actual phenomenological
reflection is necessary to make progress in the ontology of consciousness
and in ontology in general, then that computational medium does not by
itself have the capacity to make that progress. An "artificial philosopher"
built according to functionalist principles might be able to generate the
argument about arbitrariness that I made above, and a user might be able to
read the argument and understand it; this "self-reductio ad absurdum" is the
closest thing to philosophical progress that I can see coming from such a
situation. But it looks far more likely that a philosophically functionalist
AI will simply assist philosophically functionalist humans in devising new
epicycles for functionalism.