Re: No More Searle Please

From: Woody Long
Date: Thu Jan 19 2006 - 13:34:02 MST

> [Original Message]
> From: Richard Loosemore <>
> To: <>

> I will try to explain why I say this, and address the points you make.
> First, it is difficult to argue about what *exactly* Searle was claiming
> in his original paper, because in an important sense there was no such
> thing as "exactly what he said" -- he used vague language and subtle
> innuendos at certain crucial points of the argument, so if you try to
> pin down the fine print you find that it all starts to get very slippery.

Well, there IS an *exactly*, with everything else being externally added, such
as the UTM and the mind-on-top-of-mind.

Here it is clearly, in Searle's words, in his 1999 Institute of
International Studies, UC Berkeley "Conversations With History" Interview

 "Well, it's such a simple argument that I find myself somewhat embarrassed
to be constantly repeating it, but you can say it in a couple of seconds.
Here's how it goes. 
 Whenever somebody gives you a theory of the mind, always try it out on
yourself. Always ask, how would it work for me? Now if somebody tells you,
"Well, really your mind is just a computer program, so when you understand
something, you're just running the steps in the program," try it out. Take
some area which you don't understand and imagine you carry out the steps in
the computer program. Now, I don't understand Chinese. I'm hopeless at it.
I can't even tell Chinese writing from Japanese writing. So I imagine that
I'm locked in a room with a lot of Chinese symbols (that's the database)
and I've got a rule book for shuffling the symbols (that's the program) and
I get Chinese symbols put in the room through a slit, and those are
questions put to me in Chinese. And then I look up in the rule book what
I'm supposed to do with these symbols and then I give them back symbols and
unknown to me, the stuff that comes in are questions and the stuff I give
back are answers. 
Now, if you imagine that the programmers get good at writing the rule book
and I get good at shuffling the symbols, my answers are fine. They look
like answers of a native Chinese [speaker]. They ask me questions in
Chinese, I answer the questions in Chinese. All the same, I don't
understand a word of Chinese. And the bottom line is, if I don't understand
Chinese on the basis of implementing the computer program for understanding
Chinese, then neither does any other ***digital computer*** on that basis,
because no computer's got anything that I don't have. That's the power of
the computer, it just shuffles symbols. It just manipulates symbols. So I
am a computer for understanding Chinese, but I don't understand a word of
Chinese."

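The room Searle describes above is, mechanically, nothing but rule lookup: match the incoming symbol string against the rule book and hand back whatever symbols it prescribes. A minimal sketch of that "dumb card shuffling" (the rule-book entries here are invented for illustration, not taken from Searle):

```python
# Toy "Chinese Room": the operator (this program) matches incoming symbol
# strings against a rule book and returns the prescribed symbols. The
# symbols are completely opaque to the operator -- no meaning is involved.
RULE_BOOK = {
    "你好吗?": "我很好。",          # hypothetical question/answer pair
    "你叫什么名字?": "我叫塞尔。",  # hypothetical question/answer pair
}

def shuffle_symbols(symbols_in: str) -> str:
    """Look up the incoming symbols and return the prescribed reply.
    If no rule matches, fall back to a fixed symbol string."""
    return RULE_BOOK.get(symbols_in, "对不起。")

print(shuffle_symbols("你好吗?"))  # prints 我很好。
```

The point of the sketch is only that the answers can "look fine" to an outside questioner while nothing in the system understands a word of them.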
There is his precise argument in black and white text, ready for
*entailment acquisition* by our strong AIs. Forget the topic of UTM. He's
NOT talking about that, he's talking about ***digital computers*** PERIOD,
meaning all the dumb card shuffling, simulatory, non-language understanding
"classical computers" circa 1950 - 2005 including rule-based expert
systems. And what he is saying is that these ***classical digital
computers*** of his age function EXACTLY as the ENGLISH MAN in the Room
does. That is ALL he is saying - it's an apt analogy: Neither the ENGLISH
MAN nor the classical digital computer is actually "understanding the
cognitive capability" being performed, they both are performing merely as
dumb card-shuffling, simulatory, non-understanding, classical digital
computers, and can NEVER be considered strong AI conscious machines. But
hey, what do you expect from us?? It's only the first age, the classical
computer age, of 1950-2005, an age obviously for the building of the basic
tools - WP, DBMS, OS, etc. etc. - and techniques, where the mantra was 
"forget dumb, get it done." But therein lay the seed of the next computer
age, of the conscious machine with human-like intelligence and
self-awareness, of the "interpersonal computer" (IC) that we interact with
just as we do with humans, as opposed to the PCs of the classical computer
age. Hey, Rome wasn't built overnight either! We will get there, and soon. But
how shall we determine that a system is in this new post-classical strong
AI conscious machine product class? You tell me. This is the issue of this
"3 Real CM" debate which I find very interesting for my work. 
You have proposed a Question Challenge as a way. Ben Goertzel has proposed
a Pascal Challenge as a way. Both of these are Searle Tests for me. You are
both saying, "A CHINESE MAN (human-level consciousness) can understand the
incoming language of the task and perform it. How about your so-called
machine consciousness that equals human-level consciousness? As such should
it not be able to do these tasks as supporting evidence that it is in fact
a strong AI conscious machine? Or is your system in fact just another dumb
card-shuffling, simulatory, non-understanding (of input language),
classical digital computer system or ENGLISH MAN in the Room, that fails
the Searle Test, and will never be considered a strong AI conscious
machine?" So both of you are giving me a "Searle Test" (as formulated), and
since I believe in this evaluation/testing methodology I will be glad to
look into it and respond to these Challenges ASAP. But first I want to
hammer out the nature of the evaluation tests you have given me, and what we
can agree on about them, in posts like these.
So we get the Searle argument boiled down to a single proposition -
A simulatory, dumb card-shuffling classical digital computer with no
understanding of the cognitive task it is performing can never be
considered a strong AI conscious machine.
On this one aspect of the Searle Argument, I believe you and I will agree. 
Ken Woody Long

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT