From: Woody Long (firstname.lastname@example.org)
Date: Tue Oct 25 2005 - 23:56:37 MDT
> [Original Message]
> From: Olie Lamb <email@example.com>
> >I should tell you where my interest comes from in all this. My business
> >project is to enter an android in the historic Roboprize.com Prize Fight.
> >To be entered, the Rules Committee must certify that the entry is an
> >authentic android. They have asked me what parameters I would use, and I
> >said it must be driven by an SAI system and pass the Searle Chinese room
> >test. If not, it should not be considered a consciously self-aware,
> >thinking, autonomous android, and should be turned down.
> "Pass the chinese room test"?
> The Chinese Room example is an illustration of how the Turing Test does
> not imply Strong AI. Searle shows that an entity can react to language
> in a way that makes it appear as though it understands language, without
> actually understanding language.
> I don't know how the Chinese room example could be turned around to
> demonstrate that an AI is Strong.
Yes, I knew this would elicit a strong reaction, as it is my most radical
claim.
First, we are in agreement that, as you said, "Searle shows that an entity
can react to language in a way that makes it appear as though it
understands language, without actually understanding language." Exactly: it
is just a heuristic-driven (rules-of-thumb) shuffling of symbol cards, or,
in robotics, sensor-signal cards. It has no conscious understanding of what
it is doing, which is the same thing as saying it doesn't understand
language.
Now the key to the Searle Test is in the "entity" we are talking about. He
is talking about contemporary computers and their contemporary programming
methods. These are essentially heuristic-driven systems that shuffle cards,
and they include the rule-crunching inference engines and all narrow AI
systems.
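This "card shuffling" can be sketched in a few lines of code. Everything below is a made-up illustration, not any real system: a rule table pairs input symbol strings with output symbol strings, and the program matches shapes, not meanings. It would work just as well on arbitrary squiggles, which is exactly Searle's point.

```python
# Hypothetical illustration of Searle's room: a purely formal rule book.
# No entry in this table "understands" what any symbol means; a real
# system would have vastly more rules, but the principle is the same.
RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫王",  # "What is your name?" -> "My name is Wang"
}

def chinese_room(symbols: str) -> str:
    """Return whichever output card the rule book pairs with the input card.

    This matches character shapes only; the function is indifferent to
    whether the strings mean anything at all.
    """
    return RULE_BOOK.get(symbols, "对不起")  # default card: "Sorry"

print(chinese_room("你好吗"))  # produces a reply that *looks* fluent
```

To an outside observer exchanging cards with this program, the replies can appear competent, yet nothing in the lookup involves understanding, which is the distinction being drawn above.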
From a 1999 interview -
Searle -- In English I am a human being who understands English; in Chinese
I'm just a computer. Computers, therefore -- and this really is the
decisive point -- just in virtue of implementing a program, the computer is
not guaranteed understanding. It might have understanding for some other
reason but just going through the steps of the formal program is not
sufficient for the mind. ... the computer does a model or a simulation of a
process. And a computer simulation of a mind is about like computer
simulation of digestion. It's a model, it's a picture of digestion
[language understanding]. It shows you the formal structure of how it
works, it doesn't actually digest anything [understand language]!
Interviewer -- And so the computer program, then, has not explained
understanding?
Searle -- That's right. Nowhere near.
Note that he doesn't say categorically that it is impossible. But he does
draw a strict, black-and-white line between contemporary computer
methodology, which after all is not designed to produce models of humanly
intelligent minds, and post-contemporary computer methodology, what is
coming to be called strong AI (SAI) methodology, which might, at least in
theory, consciously understand language.
Thus we get the proposed Searle Chinese Room Test -
I. If a computer system in the room can be shown to be consciously
understanding the input language, it will be designated an artificial
consciousness.
II. If not, it is not an artificial consciousness.
In robotics, an android is the ultimate creation. It is human artificial
life. It is Star Trek's Data. As a completely human-like creation, it
requires a fully human-intelligent, humanoid SAI system. And human-like
consciousness requires a self, self-awareness, and motivational autonomy,
so to show artificial consciousness is to show humanoid SAI.
Thus in order for an android to be a consciously self-aware, motivationally
autonomous entity, it will have to be driven by SAI, and it will have to
pass the Searle Test.
Ken Woody Long
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT