Re: What are qualia...

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Sun Jan 23 2005 - 15:49:01 MST


Mitchell Porter wrote:
> And what is the "mystery" that is being "reified"? Oh, just
> that half of what we experience in sensation (the "secondary
> qualities") DOES NOT EXIST in the world-according-to-physics!

The mysterious question is 'what are qualia?'. The non-mysterious
question that we should be asking is 'why do people think they have
qualia?'. The non-mysterious answer to the second question, which we
don't have yet, would be a description of your cognitive processes
causally complete enough to explain (in terms of neuron firings) why
you asked the first question. This would combine a description of how
the cognitive processes we reflectively categorise as various sorts of
qualia (the 'neural correlates of qualia') contribute to human cognition
/and/ an explanation of how the process of examining this reflective
perception in detail derails into dualistic, subjectivist fantasies.
Like many materialists, I have my own guesses about possible inconsistencies
in the brain's representational schemes (themselves embedded in neural
machinery generated by pressure not to represent reality consistently,
but to survive the paleolithic environment) that would account for
this. However, we won't be able to answer it with any certainty
until we get much better data.

> What we see as "red", for instance, is really just a colorless
> arrangement of corpuscles, which, by their particular size,
> shape, and motion, have the power to produce in us the
> sensation of redness.

For example, in this particular confusion, people have taken the
brain's heavy bias towards (and optimisation for) object/property
representational schemes and stretched it way past breaking point.
As Eliezer would say, there is no redness out there in reality
because the sensation of redness is a feature of the (neural) map,
not the territory. The reason we can't see this directly is that
our cognitive architecture doesn't have a clean separation between
reality map, reflective map and processing mechanisms.
Have a look at LOGI's stuff on concept kernels to get an idea of
what the 'qualia' actually do (bearing in mind that the brain is
much messier than LOGI), and why they seem so much richer than a
simple 'photons between wavelength X and Y' constraint.
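
To make the map/territory point concrete, here is a toy sketch
(Python; every name in it is invented for illustration, and it is
nothing like real neural machinery). The territory contains only a
wavelength; 'red' is a fact about the classifier:

    # Toy sketch: 'redness' lives in the map, not the territory.
    def toy_percept(wavelength_nm):
        """Crude stand-in for a concept kernel: label a wavelength.

        The photon is just a number; the label 'red' is a feature
        of this classifier (the map), not of the photon itself.
        """
        return 'red' if 620 <= wavelength_nm <= 750 else 'not-red'

    print(toy_percept(650.0))  # 'red' -- a fact about the map
    print(toy_percept(470.0))  # 'not-red'

A real concept kernel is entangled with everything red things have
ever co-occurred with, which is why the percept feels like far more
than a wavelength constraint.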

> rejecting Searle's point that "reference", self- or otherwise,
> does not exist in a purely physical system, any more than a
> pattern of black and white markings inherently means anything.

Physics doesn't have reference, it has correlation. What we call
reference is a pattern of correlations that we find useful in
some specific way (i.e. we can reliably cause other people to
manipulate a bit of their map that tracks roughly the same bit of
reality as a piece of our map).
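
A minimal sketch of that claim (Python; the scenario is invented):
a word 'refers' only in the sense that it reliably moves the right
bit of someone else's map into correlation with the same bit of
reality:

    # Toy sketch: 'reference' as a useful pattern of correlation.
    world = {'sky': 'overcast'}        # the territory

    alice = {'sky': world['sky']}      # Alice looked; her map correlates
    bob   = {'sky': 'unknown'}         # Bob hasn't looked yet

    utterance = ('sky', alice['sky'])  # Alice: "the sky is overcast"
    key, value = utterance
    bob[key] = value                   # Bob's map now tracks the same
                                       # bit of reality as Alice's
    assert bob['sky'] == world['sky']

No extra-physical 'aboutness' anywhere; just three systems whose
states have been pushed into correlation in a way we find useful.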

> So what is the alternative? One can begin by trying to
> perceive the extent to which one's own perception of
> the world is actually the result of imagination.

Welcome to 'active perception'. Join the queue on the right to
receive your free inoculation against semantic net infatuation.

> But can you also "see" that "seeing color A" and
> "neurons firing" are also very different things.

They're both maps of the same bit of reality, just with
differing detail and level of indirection.
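
Something like this toy sketch (Python; the 'units' are invented
for illustration): one physical event, two descriptions, with the
coarse one computed from the detailed one rather than from the
photon itself:

    # Toy sketch: one bit of territory, two maps at different levels.
    event_nm = 650.0                  # the territory: a wavelength

    # Detailed map, low indirection: which toy 'units' respond.
    firing = {name: int(abs(event_nm - centre) < 50)
              for name, centre in (('unit_b', 440),
                                   ('unit_g', 540),
                                   ('unit_r', 650))}

    # Coarse reflective map, higher indirection: computed from the
    # detailed map, not from the photon.
    seeing_red = (firing['unit_r'] == 1)

    print(firing, seeing_red)         # two maps of the same event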

> and any theory which ends up denying the very existence
> of that starting point is on the wrong track.

If we could reliably predict the subjective experience that
will result from any arbitrary combination of direct neural
stimuli, would that be good enough for you?

> The immediate challenge is just to develop an adequate
> description of consciousness - a phenomenology.

We tried doing that with just reflective data for a few thousand
years and it didn't work. The experimentalists say that they'll
have the answer after a bit more progress on instrumentation and
a lot of computer time. It seems reasonable to wait a few paltry
decades (CRNS) and give them a chance to deliver on that. Of
course, IMHO, we can't afford to wait.

> I want to raise just one more issue, and that is the peril of
> creating AI - especially "self-enhancing" and "Friendly" AI -
> when the nature of consciousness and physical reality is not
> yet understood.

Robust FAI designs have to be able to cope with the possibility
that our ideas about most things are wrong. This is implied by
the fact that nothing has a starting prior of 1.0 or 0.0. CV is
unusually demanding as an FAI theory in that it requires a
complete theory of human cognition to execute, but that theory
does not necessarily have to be developed by humans.
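
The prior point is just Bayes' rule; a quick sketch (Python) of why
a design that assigns probability exactly 0.0 or 1.0 to anything
can never be argued out of it by any evidence:

    # Minimal sketch: degenerate priors are immune to evidence.
    def update(prior, p_e_given_h, p_e_given_not_h):
        """Posterior P(H|E) by Bayes' rule."""
        num = prior * p_e_given_h
        den = num + (1.0 - prior) * p_e_given_not_h
        return num / den if den else prior

    for prior in (0.0, 0.5, 1.0):
        # Evidence 99x likelier if H is false than if it is true.
        print(prior, '->', update(prior, 0.01, 0.99))
    # 0.0 -> 0.0 (stuck), 0.5 -> 0.01 (moved), 1.0 -> 1.0 (stuck)

Hence a robust design keeps even 'our theory of mind is right'
strictly between 0.0 and 1.0.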

> It would appear that with AI, we are not re-creating
> consciousness; we are instead creating the best illusion
> of it that we can, while operating within the physicalist
> framework - and then buying into the illusion.

It's true that, at the moment, no-one knows how to exactly
replicate the causal system that produces human conscious
experience.
There are various guesses about how to do it and some people are
going ahead and trying anyway. Presumably if you believe
consciousness is extra-physical in some way you don't accept
that an upload of a person would be conscious (if so, at what
point does consciousness cease if we replace natural neurons with
artificial ones one by one?).

> Even without a Singularity, it looks like we will be sharing
> the world with entities which are genuinely not conscious but
> which *can* pass the Turing Test.

This I agree with, in that (a) it should be possible to simulate
the output of a human-style introspective system without actually
instantiating the process in such a way as to generate conscious
experience (this is required if we want to implement CV without
creating hordes of imprisoned sentients), and (b) under most
circumstances it is both simpler and safer to build AGI this way.

> What I *do* think is unlikely, is that we will build super-AIs
> on the basis of a wrong theory of the mind's place in nature,
> which will then magically acquire the capability to discover
> our mistake.

It's certainly possible; AIs should be much better researchers than
humans, and if necessary one could trace human brain pathways and
work out why we perceive qualia without being hindered by our
countless inbuilt delusions about how we work. Even if there is
causally significant incomputability lurking in there somewhere, an
FAI should have no problem finding it (though of course building an
FAI in the first place is very, very hard).

> If we do this, we will more likely just end up *replacing* real
> mind with pseudo-mind, throughout nature.

This is a real issue, but it's one of preserving a preferred type of
cognitive architecture despite the fact that it's inconsistent and
inefficient, not anything mystical or extraphysical.

 * Michael Wilson

        
        
                


