From: Daniel Radetsky (email@example.com)
Date: Wed Jan 18 2006 - 00:58:59 MST
On Tue, 17 Jan 2006 20:11:22 -0800 (PST)
Phil Goetz <firstname.lastname@example.org> wrote:
> Searle's Chinese room is not a reductio ad absurdum of semantics-free
> emulation. This is proven because, when presented with a situation in
> which the Chinese room is embedded within a robot body just like a
> human's, responding directly to sensory stimuli, Searle STILL says it
> has no consciousness.
Searle has responded to this. Any decent intro to the Chinese Room Argument
should present this. Briefly, Searle argues that the sensory stimuli are just
more input, the responses are just more output, and it doesn't matter to the
room-in-the-robot whether the output is "Print reply X" or "Activate motors
in leg." If you haven't already, I suggest you read the relevant literature
before pinning your hopes on what is probably the worst move against Searle.
(In all fairness, the absolute worst move is probably "If we ask the computer
'Do you understand Chinese?' and it says 'Yes,' then it must understand
Chinese." But this is so stupid that it doesn't count.)
> Searle has elaborated repeatedly and extensively on the Chinese room
> argument in the 25 or so years since he made it.
> One of the things he says is that
> we need "brain stuff" to produce consciousness, and that the lack of
> consciousness in the computer is because it lacks a physical substrate
> with specific, but currently unknown, properties (much like Penrose'
> Or, in other words, a soul.
"A physical substrate with specific, but currently unknown, properties" and "a
soul" sure seem like different things to me. Do you know something I don't?
> My more important point is that Woody's test is untestable. We have no
> way to evaluate whether a machine is conscious of the meaning of its
> inputs in the same way that a human is.
Are you sure? Perhaps when we have a more complete understanding of the way
that humans are conscious of the meaning of their inputs, we will realize that
we can determine whether a given machine is similarly conscious. It sounds like
you're just making an argument from lack of imagination. If you disagree, tell
me why I should believe such a test is impossible, rather than nonexistent.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT