From: Mitchell Porter (firstname.lastname@example.org)
Date: Thu Jul 28 2005 - 20:43:35 MDT
Michael Wilson wrote:
>Mitchell Porter wrote:
> > However, epistemologically, qualia are the starting point, and
> > the material universe is the thing posited.
>I read this section as; 'qualia are what we call the primitive data
>structures which all of our consciously accessible sensory data is
This discussion requires a descriptive vocabulary for consciousness that
does not presuppose materialism. We cannot have it if at the first
opportunity you wilfully reinsert the very assumptions I wish to proceed
without. Already I am driven to talk about qualia because 'sensation' has
long since been implicitly redefined to mean 'neural representation', and
therefore a state of a material system by definition. In his day, Bertrand
Russell could puzzle about the relationship between sensation and the
physical world, but if I had written 'Sensations are the starting point,
physics is the thing posited', it apparently would have been read as a
statement about neural data structures.
In a previous discussion in January, when I brought up qualia, you wrote:
"The mysterious question is 'what are qualia?'. The non-mysterious question
that we should be asking is 'why do people think they have qualia'?" Clearly
this creates a dilemma for me. No matter what new words I introduce -
qualia, raw experience, first-person experience, the phenomenal world - they
will either be dismissed as designating a non-entity, or interpreted in a
way that presupposes materialism. Yet I think we do have the potential to
communicate, since you write below of people seeing that "their mental
representation of physics is nothing like their mental representation of
pain", which I read as ;-) a statement, in representationalist idiom, of the
phenomenological fact demonstrating that materialism about the mind is a
problem.
So there are two issues on my agenda here. One is the development of a
phenomenological discourse and practice which goes as far as it can without
speculation, including speculation about neural and computational
substrates. The other is the critical examination of the prevailing
ontological hypotheses about mind, such as identity theory and property
dualism.
> > I ask you to conceive of point particles possessing a specified
> > mass, a specified charge, a specified location, and no other
> > properties. Sprinkle them about in space as you will, you will not
> > create a 'sensation of color'. Equip them with a certain dynamics,
> > and you may be able to construct an 'environment with properties'
> > and a 'stimulus classifier'; name some of those environmental
> > properties 'colors' and some of the classifier's states 'sensations
> > of color', and you may be able to mimic the apparent causal
> > relations between our environment and our sensations of color; but
> > the possible world you have thereby specified does not contain
> > sensations of color as we know them, and therefore cannot be the
> > world we are inhabiting.
>I'm not clear if you're building a classifier out of the particles
>and embedding it in the universe, or tacking something on to the base
>physics. I agree that the latter wouldn't work anything like our
>universe, where secondary properties are entirely higher-level
>regularities in the group of particles that constitute the perceiver.
>But in either case, if you imitate /all/ the causal properties of
>the human concepts of colour (including intermediate sensory
>processing details that affect our conscious reasoning but which we
>can't clearly describe), how is that not sensations of color?
>Substrate independence includes both the internal details of
>black-box lower-level algorithms (assuming they don't systematically
>affect the output) and the details of whatever physics you are
>implemented in. The more coherent proposals of the people who want
>qualia to be part of physics are superficially plausible because in
>principle qualia could really work as ontological primitives; it's
>just that there's overwhelming evidence that our universe does not
>work like that.
The classifier is to be conceived of as, indeed, a high-level structure in a
possible world with the base physics specified. In fact, feel free to think
of it as a proposed physical model of human cognition, with string theory or
the Standard Model as the base physics. When a person asks themselves,
'Could that be the way the world is?', what happens? They take their own
experience, they take the model, and they ask if there's something in the
model for everything in their experience. You have suggested that intuitions
of dualism arise merely because 'physics' and 'pain' are apprehended through
different modalities, whereas I say there is an intrinsic mismatch between
the two objects of apprehension, independent of modality.
Most of the rest of this message is a game of 'is not, is too'. But we may
be able to get somewhere by focusing on the interplay between phenomenology,
epistemology, and ontology of mind; the analogous interplay between
cognitive science, computer science, and physics-based neuroscience; and the
relationship between those two. I think the latter is a subset of the
former, and needs to be pursued in that context, but doing this properly
requires a philosophical fluency that is difficult to build up (it's taking
*me* a long time to get it, at least).
> > We are faced, not just with a self-denying sensibility which wishes
> > to assert that colorless matter in motion is all that exists (in
> > which case the secondary properties - the qualia - are either
> > mysteriously identical with certain unspecified conjunctive
> > properties of large numbers of these particles, or even more
> > mysteriously do not exist at all),
>People (well, most of them) don't go around saying that 'thoughts do
>not exist' because they're happy with the idea that thoughts are
>inside their head, and that they exist as patterns that the basic
>elements of their brain (e.g. synapses, activation spikes) adopt. The
>confusion about qualia exists because people intuitively believe that
>qualia are 'out there' rather than 'in here'. This is sensible for
>normal reasoning; it saves a dereference to just store 'the bus is red'
>rather than 'there is an object that reliably causes the red detector
>to fire'. As usual though direct intuition is worse than useless -
>actively misleading - when trying to unravel how human cognition
But there is no actual redness inside the classifier any more than there is
outside it. The 'firing of the red detector' is a different thing, and
people should not be happy with the idea that thoughts are patterns of
brain-elements. As I said previously, this is a metaphysically exotic
assertion of identity, "on a par with the assertion that this rock over here
is 'really' the number 2".
> > Mathematical physics, as we know it, is both an apex and a dead
> > end. No amount of quantitative predictive progress through better
> > model-building is going to explain consciousness, because the
> > models in question exclude certain aspects of reality *by
> > construction*.
>I disagree, assuming you allow the search for and use of higher level
>regularities (and hence all parts of the unified causal model other
>than the physical foundation). I have yet to hear any coherent
>question about consciousness on this list which cannot be irrefutably
>answered given enough research and computer time (though that amount
>may well be sufficient that it won't be so answered any time soon).
Again, this philosophy rests on the viability of identifying consciousness
with a "high-level regularity".
> > But I do regard it to be a kind of arrow to be shot at physical
> > reductionists - which is to say people who believe that talking
> > about brain states is the same thing as talking about mental states.
> > There is something which pain is like which is not described by
> > physics equations, even if physics equations can account for the
> > progress of the state of the world.
(That wasn't me, that was Chris Capel.)
> > A stock example of an inexact or vague predicate is baldness. You're
> > not bald when you have a full head of hair; you are bald if you have
> > none; but what if you have one hair, ten hairs, a thousand hairs;
> > where is the dividing line between bald and not-bald? There is no
> > reason to think that there is any way to answer that question
> > without arbitrary stipulation that, say, 1000 hairs is the most you
> > can have and still be bald.
>I don't think you need to be so concerned about using exact
>predicates. Fuzzy predicates are a direct consequence of the fact that
>tractable classifiers are usually unreliable, particularly on human
>neural hardware; the probability of returning 'bald' on seeing a head
>varies smoothly from 'nearly 0' at some lower threshold of hair to
>'nearly 1' beyond a higher threshold. I agree that exact predicates
>are usually preferable when constructing a precise theory, but we have
>a /long/ way to go before we're at that point with subjective
>sensation. Right now the terms people are using have much more serious
>clarity issues than being based on simple probabilistic classifiers.
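The smoothly varying classifier output Wilson describes can be sketched as a logistic curve. The midpoint and steepness values below are invented purely for illustration; the fact that they must be picked by stipulation is exactly the arbitrariness under discussion.

```python
import math

def p_bald(hair_count, midpoint=5000.0, steepness=0.002):
    """Probability that a noisy classifier returns 'bald' for a given
    hair count. The midpoint and steepness are illustrative
    assumptions, not measurements of any real classifier."""
    return 1.0 / (1.0 + math.exp(steepness * (hair_count - midpoint)))

# The probability varies smoothly: near 1 for very few hairs,
# near 0 for a full head, with no sharp bald/not-bald boundary.
for n in (0, 1000, 5000, 9000, 100000):
    print(n, round(p_bald(n), 3))
```

Nothing in the curve itself marks a boundary; any exact bald/not-bald cut-off has to be imposed on it from outside.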
My point is that when you look at the *metaphysics* of identity theory
(or even of functionalist dualism, such as David Chalmers seems to espouse),
you find that it rests on fuzzy predicates. Computational states are
identified with microphysical states by a fuzzy coarse-graining of physical
state space
which, to be made exact, requires the exact specification of boundaries,
and that step is where the arbitrariness enters.
> > But if we consider all possible distributions of electrons
> > throughout a transistor, there will clearly be marginal cases.
>We solve this by adding a third 'undefined' state. We set the
>thresholds for '0' and '1' such that all states in those categories
>result in predictable behaviour. All marginal cases and in practice
>some predictable states for which the distance to the threshold is
>within the range of measurement error go in the 'undefined' category.
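The three-valued reading Wilson proposes can be sketched in a few lines. The threshold voltages here are invented for illustration; the point in dispute is precisely that some such values must be stipulated.

```python
# Illustrative, stipulated thresholds -- not taken from any real device.
LOW_MAX = 0.8    # at or below this voltage, read '0'
HIGH_MIN = 2.0   # at or above this voltage, read '1'

def read_bit(voltage):
    """Classify a transistor state as '0', '1', or 'undefined'.
    Marginal cases between the two thresholds are left undefined
    rather than forced into either bit value."""
    if voltage <= LOW_MAX:
        return '0'
    if voltage >= HIGH_MIN:
        return '1'
    return 'undefined'

print(read_bit(0.3))   # '0'
print(read_bit(3.1))   # '1'
print(read_bit(1.4))   # 'undefined'
```

The 'undefined' band absorbs the marginal cases, but the locations of LOW_MAX and HIGH_MIN remain a free choice.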
But then the arbitrariness comes in the setting of thresholds.
> > I take it as given, not just that there are qualia, but that there
> > is awareness of qualia, and an associated capacity to reason about
> > their nature, and that this is what makes phenomenological
> > reflection possible in human beings.
>Since the human brain is an evolved structure, do you believe that
>natural selection discovered qualia or invented them? Either way, if
>you think qualia are indivisible primitives please answer the
>question 'what use is half a qualia'?
Ontologically, I think the existence of qualia in our world is necessitated
by the weak anthropic principle. That is, if you need qualia to have
conscious observers, then a priori there are qualia here. But this doesn't
tell you whether everything is a quale (panpsychism) or just some things
(e.g. as in dualist scenarios of 'strong emergence'), and therefore it
doesn't tell you whether they were 'discovered' or 'invented'.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT