From: Daniel Radetsky (daniel@radray.us)
Date: Wed Jan 18 2006 - 16:48:50 MST
On Wed, 18 Jan 2006 08:09:43 -0500
Richard Loosemore <rpwl@lightlink.com> wrote:
> END OF ARGUMENT.
If you don't want to talk about Searle, don't talk about Searle, but don't give
a set of reasons why not to talk about Searle and then expect me not to respond.
> He proposed a computational system implemented on top of another
> computational system (Chinese understander implemented on top of English
> understander). This is a mind-on-top-of-mind case that has no relevance
> whatsoever to either (a) human minds, or (b) an AI implemented on a
> computer.
This is a version of a response Jerry Fodor made a long time ago, and Searle
responded to it very adequately, I think. Since the mind-on-top-of-mind is
itself implementing a Turing machine, it is computationally equivalent to
anything else implementing a Turing machine. So it is completely relevant to
whether or not a computer (something implementing a Turing machine) can be
conscious.
I'll be blunt: if you want to challenge Searle, use the Systems Reply. It is
the only reply that actually works, because it is the only one that explicitly
rejects Searle's fundamental premise (that consciousness is a causal process,
not a formal one). You went on to make something like the Systems Reply in the
rest of your post, but you aimed it at a straw man. Searle never claims that
Strong AI is false because 'understanding doesn't bleed through.' He claims
(in the original article; I haven't read everything on this subject) that no
additional understanding is created anywhere, in the room or in the man, and
so Strong AI is false. That is, the fact that 'understanding doesn't bleed
through' is only a piece of the puzzle.
Daniel