Re: The GLUT and functionalism

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Tue Mar 25 2008 - 00:52:18 MDT


On 25/03/2008, Lee Corbin <lcorbin@rawbw.com> wrote:

> No, because sufficiently low-level table lookups are just fine. Not
> as any kind of estimate to take to the bank, suppose me to be
> claiming that when you start looking up bit patches of 10^6 or so
> --- or, in the inimitable example of a Life Board, a region 1000x1000
> --- then a very small diminution of consciousness occurs.

If consciousness is Turing emulable, then it is emulable in Conway's
Game of Life (GOL). Suppose we have a large Life Board which is
emulating a human mind, interfaced with a camera, microphone and
loudspeaker so that it has vision, hearing and speech. The emulation
is shown a picture of a dog and asked to describe it, which it does,
just as well as you or I might. Next, a change is made to a patch of
the Board so that the states of those squares are looked up in a
precomputed table rather than calculated from the GOL rules. This
patch is large enough to cause the theorised diminution in conscious
experience. The emulation is still looking at the dog and describing
what it is seeing as the change is made. What happens?
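
To make the looked-up-patch idea concrete, here is a minimal sketch
in Python (my own illustration, not anything from Lee's post): the
next state of an n x n patch of a Life Board depends only on the
patch plus its one-cell border, so it can be served from a table
keyed on that region instead of being calculated rule by rule. The
function names and the lazy table fill are just illustrative choices;
in the thought experiment the table would be precomputed
exhaustively.

def cell(board, r, c):
    # Cell value, with everything off the Board treated as dead.
    return board[r][c] if 0 <= r < len(board) and 0 <= c < len(board[0]) else 0

def step_cell(board, r, c):
    # Conway's rule for a single square.
    live = sum(cell(board, r + dr, c + dc)
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))
    return 1 if live == 3 or (cell(board, r, c) and live == 2) else 0

def step_patch_calculated(board, top, left, n):
    # The patch's next state, calculated square by square.
    return tuple(tuple(step_cell(board, r, c)
                       for c in range(left, left + n))
                 for r in range(top, top + n))

lookup = {}  # key: the patch plus its one-cell border; value: next patch state

def step_patch_looked_up(board, top, left, n):
    # The same answer served from the table (filled lazily here;
    # exhaustively precomputed in the thought experiment).
    key = tuple(tuple(cell(board, r, c)
                      for c in range(left - 1, left + n + 1))
                for r in range(top - 1, top + n + 1))
    if key not in lookup:
        lookup[key] = step_patch_calculated(board, top, left, n)
    return lookup[key]

By construction the table delivers exactly the bits that calculation
would have delivered, which is all the rest of the Board ever sees.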

If there is a change in consciousness, then the emulation notices that
the picture has suddenly gone blurry, or that one of the dog's legs
has disappeared, or whatever (we can imagine the looked-up patch
growing until the change in visual perception becomes noticeable).
So, as per your instructions, the emulation tries to report this
change. However, there is a problem: the squares on the Board which
interface with the loudspeaker are *exactly the same* as they would
have been if the looked-up patch had actually been calculated, since
the table contains precisely the states that calculation would have
produced and the rest of the Board evolves deterministically from
them. So the emulation would be saying, "It's the same picture of a
dog, try looking up a larger patch of squares", while thinking, "Oh
no, I'm going blind, and my mouth is saying stuff all on its own!".
But how is this possible unless you posit a disembodied soul, which
becomes decoupled from the emulation and goes on to have its own
separate thoughts?
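
The determinism claim is easy to check in the sketch above: evolve
one Board entirely by calculation and a copy in which a central patch
is served from the table, and the squares standing in for the
loudspeaker interface never differ. (Board size, patch position and
the choice of "interface" columns below are arbitrary illustrative
choices of mine.)

import random

def step_board(board, looked_up_patch=None):
    # One full generation; optionally serve one patch from the table.
    rows, cols = len(board), len(board[0])
    nxt = [[step_cell(board, r, c) for c in range(cols)] for r in range(rows)]
    if looked_up_patch is not None:
        top, left, n = looked_up_patch
        patch = step_patch_looked_up(board, top, left, n)
        for i in range(n):
            for j in range(n):
                nxt[top + i][left + j] = patch[i][j]
    return nxt

random.seed(0)
a = [[random.randint(0, 1) for _ in range(12)] for _ in range(12)]
b = [row[:] for row in a]
for _ in range(20):
    a = step_board(a)                             # everything calculated
    b = step_board(b, looked_up_patch=(4, 4, 4))  # central patch looked up
    assert [row[8:] for row in a] == [row[8:] for row in b]  # "loudspeaker" squares match
assert a == b  # in fact every square is bit-identical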

The other possibility is that there is a change to visual perception
which is not actually noticed. When all the squares relating to visual
perception are looked up, the emulation becomes blind, but it doesn't
realise it's blind and continues to accurately describe what is shown
to it, using zombie vision. This is almost as implausible, and raises
the question of what it means to perceive something.

The above is a variation on Chalmers' "Fading Qualia" argument:

http://consc.net/papers/qualia.html

(I might add that many cognitive scientists don't like Chalmers
because of his insistence that there is a "hard problem" of
consciousness, but in fact he is mostly an orthodox computationalist,
and the above paper probably presents the strongest case for
consciousness surviving neural-replacement scenarios.)

-- 
Stathis Papaioannou

