From: Richard Loosemore (rpwl@lightlink.com)
Date: Thu Jan 19 2006 - 14:27:57 MST
Searle didn't make the claim you suggest: he was talking about the
person in the room following the procedures necessary to act as a Turing
Machine and implement by hand *any* computer program, not any specific
type of intelligent software, so he would wriggle out from underneath
that attack.
I attacked Searle on a different plane.
Your last comment confuses me a little: in my response to Daniel I did
not try to defend the idea of "evidence" for consciousness.
As for the *idea* of consciousness being ridiculous .... that is another
kettle of fish entirely! I am writing a paper on the subject so I will
save my comments for when that is done.
Richard
micah glasser wrote:
> The problem with Searle's critique is quite simple - he begins with the
> false assumption that a machine can pass a Turing test with some sort of
> functionalist language table. No machine has ever been able to
> genuinely answer questions in a fashion that would satisfy the Turing
> test using such methods. Yet Searle pretends that a machine can already
> pass a Turing test using such "card shuffling" techniques and then
> proceeds to show that the Turing test can't possibly be a genuine
> indicator of human level intelligence because it is being accomplished
> through such a trivial technique. This whole line of thinking is just
> wrong and is philosophically indefensible. It may turn out that brains
> are not UTMs (Jeff Hawkins et al) but it still stands that if a UTM can
> pass a genuine Turing test then it is necessarily as intelligent as a
> human, since the intelligence of humans is measured through their
> linguistic capacity. If you presented me with 20 different interlocutors
> I could, after interviewing them all, have a very good idea of which
> were the most intelligent through how well they were able to formulate
> responses to my questions. This ability is not trivial - it IS human
> intelligence. The fact that people are still talking about Searle and
> his charlatan claims is just evidence of how philosophically illiterate
> the world has become.
>
> One more thing, in response to Daniel: if you believe that there can be
> evidence for consciousness, I would love to know what that would be.
> Until I have been made aware of such a test, I hold that the very idea
> is ridiculous.
>
> On 1/19/06, *Richard Loosemore* <rpwl@lightlink.com> wrote:
>
> Daniel,
>
> In spite of your comments (below), I stand by what I said. I was trying
> to kill the Searle argument because there is a very, very simple reason
> why Searle's idea was ridiculous, but unfortunately all the other
> discussion about related issues, which occurred in abundance in the
> original BBS replies and in the years since then, has given the
> misleading impression that the original argument had some merit.
>
> I will try to explain why I say this, and address the points you make.
>
> First, it is difficult to argue about what *exactly* Searle was claiming
> in his original paper, because in an important sense there was no such
> thing as "exactly what he said" -- he used vague language and subtle
> innuendos at certain crucial points of the argument, so if you try to
> pin down the fine print you find that it all starts to get very
> slippery.
>
> As an example, I will cite the way you phrase his claim. You say:
>
> "He claims ... that no additional understanding is created anywhere, in
> the room or in the man, and so Strong AI is false."
>
> How exactly does Searle arrive at this conclusion? In Step 1 he argues
> that the English speaking person does not "understand" Chinese. If we
> are reasonable, we must agree with him. In Step 2 he says that this is
> like a computer implementing a program (since the English speaker is
> merely implementing a computer program). In Step 3 he goes on to
> conclude that THEREFORE when we look at a computer running a Chinese
> understanding program, we have no right to say that the computer
> "understands" or is "conscious of" what it is doing, any more than we
> would claim that the English person in his example understands Chinese.
>
> My beef, of course, was with Step 2. The system of mind-on-top-of-mind
> is most definitely NOT the same as a system of mind-on-top-of-computer.
> He is only able to pull his conclusion out of the hat by pointing to
> the understanding system that is implementing the Chinese program
> (namely the English speaking person), and asking whether *that*
> understanding system knows Chinese. He appeals to our intuitions. If
> he had proposed that the Chinese program be implemented on top of some
> other substrate, like a tinkertoy computer (or any of the other
> gloriously elaborate substrates that people have discussed over the
> years) he could not have persuaded our intuition to agree with him. If
> he had used *anything* else except an intelligence at that lower level,
> he would not have been able to harness our intuition pump and get us to
> agree with him that the "substrate itself" was clearly not
> understanding Chinese.
>
> But by doing this he implicitly argued that the Strong AI people were
> claiming that in his weird mind-on-mind case the understanding would
> bleed through from the top level system to the substrate system. He
> skips this step in his argument. (Of course! He doesn't want us to
> notice that he slipped it in!). If he had inserted a Step 2(a): "The
> Strong AI claim is that when you implement an AI program on top of a
> dumb substrate like a computer it is exactly equivalent to implementing
> the same AI program on top of a substrate that happens to have its own
> intelligence," the Strong AI people would have jumped up and down and
> cried foul, flatly refusing to accept that this was their claim. They
> would say: we have never argued that intelligence bleeds through from
> one level to another when you implement an intelligent system on top of
> another intelligent system, so your argument breaks down at Step 2 and
> Step 2(a): the English speaking person inside the room is NOT analogous
> to a computer, so nothing can be deduced about the Strong AI argument.
>
> So when you say: "Searle never claims that since 'understanding doesn't
> bleed through,' Strong AI is false." I am afraid I have to disagree
> completely. It is implicit, but he relies on that implicit claim.
>
> And while you correctly point out that the "Systems Argument" is a good
> characterisation of what the AI people do believe, I say that this is
> mere background, and is not the correct and immediate response to
> Searle's thought experiment, because Searle had already undermined his
> argument when he invented a freak system, and then put false words into
> the mouths of Strong AI proponents. My point is that the argument was
> dead at that point: we do not need to go on and say what Strong AI
> people do believe, in order to address his argument.
>
> In fact, everyone played into his hands by going off on all these other
> speculations about other weird cases. What is frustrating is that the
> original replies should ALL have started out with the above argument as
> a preface, then, after declaring the Chinese Room argument to be invalid
> and completely dead, they should have proceeded to raise all those
> interesting and speculative ideas about what Strong AI would say about
> various cases of different AI implementations. Instead, Searle and his
> camp argued the toss about all those other ideas as if each one were a
> failed attempt to demolish his thought experiment.
>
> Finally, Searle's response to the mind-on-mind argument was grossly
> inadequate. Just more of the same trick that he had already tried to
> pull off. When he tries to argue that Strong AI makes this or that
> claim about what a Turing machine "understands," he is simply trying to
> generalise the existing Strong AI claims into new territory (the
> territory of his freak system) and then quickly say how the Strong AI
> people would extend their old Turing-machine language into this new
> case. And since he again puts a false claim into their mouths, he is
> simply repeating the previous invalid argument.
>
> The concept of a Turing machine has not, to my knowledge, been
> adequately extended to say anything valid about the situation of one
> Turing machine implemented at an extremely high level on top of another
> Turing machine. In fact, I am not sure it could be extended, even in
> principle. For example: if I get a regular computer running an
> extremely complex piece of software that does many things, but also
> implements a Turing machine task at a very high level, which is then
> used to run some other software, there is nothing whatsoever in the
> theory of Turing machines that says that the pieces of software running
> at the highest level and at the lowest level have to relate to one
> another: in an important sense they can be completely independent.
> There are no constraints whatsoever between them.
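>
> To make this concrete, here is a toy sketch of my own, in Python --
> nothing from Searle or the Strong AI literature, and every name in it
> is invented for illustration. The host loop's "own" job is to
> accumulate squares, but on each cycle it also steps a tiny tape-machine
> interpreter running a guest program it knows nothing about. Nothing
> about the squares constrains the guest's output, or vice versa:
>
>     def make_guest(program, tape_len=16):
>         # A minimal tape-machine interpreter, advanced one step at a time.
>         tape, ptr, pc, out = [0] * tape_len, 0, 0, []
>         def step():
>             nonlocal ptr, pc
>             if pc >= len(program):
>                 return False                       # guest has halted
>             op = program[pc]
>             if op == '+': tape[ptr] += 1           # increment current cell
>             elif op == 'R': ptr += 1               # move tape head right
>             elif op == '.': out.append(tape[ptr])  # emit current cell
>             pc += 1
>             return True
>         return step, out
>
>     step_guest, guest_out = make_guest("+++.R++.")  # guest emits 3, then 2
>     host_log = []
>     for cycle in range(50):
>         host_log.append(cycle * cycle)  # the host's own work: squares
>         step_guest()                    # ...while also hosting the guest
>
>     print(guest_out)     # [3, 2] -- fixed by the guest program alone
>     print(host_log[:5])  # [0, 1, 4, 9, 16] -- fixed by the host alone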
>
> The lower level software might be managing several autonomous space
> probes zipping about the solar system and interacting with one another
> occasionally in such a way as to implement a distributed Turing
> machine, while this Turing machine itself may be running a painting
> program. But there is no earthly reason to think that "Turing machine
> equivalence" arguments can be used to say that the spacecraft system is
> "really" the same as a painting program, or has all the functions of a
> painting program.
> This is, as I say, a freak case that was never within the scope of the
> original claims: the original claims have to be extended to deal with
> the freak case, and Searle's disingenuous extension is not the one that
> Strong AI proponents would have made.
>
>
> Richard Loosemore.
>
>
>
>
>
> Daniel Radetsky wrote:
> > On Wed, 18 Jan 2006 08:09:43 -0500
> > Richard Loosemore <rpwl@lightlink.com> wrote:
> >
> >
> >>END OF ARGUMENT.
> >
> >
> > If you don't want to talk about Searle, don't talk about Searle, but
> > don't give a set of reasons why not to talk about Searle, and expect
> > me not to respond.
> >
> >
> >>He proposed a computational system implemented on top of another
> >>computational system (Chinese understander implemented on top of
> >>English understander). This is a mind-on-top-of-mind case that has no
> >>relevance whatsoever to either (a) human minds, or (b) an AI
> >>implemented on a computer.
> >
> >
> > This is a version of a response made a long time ago by Jerry Fodor.
> > Searle responded, and very adequately I think. Since the
> > mind-on-top-of-mind is something which is implementing a Turing
> > machine, it is the same thing computation-wise as anything else
> > implementing a Turing machine. So it is completely relevant to
> > whether or not a computer (something implementing a Turing Machine)
> > can be conscious.
> >
> > I'll be blunt: if you want to challenge Searle, use the Systems
> > Reply. It's the only reply that actually works, since it explicitly
> > disagrees with Searle's fundamental premise (consciousness is a
> > causal, not a formal, process). You went on to make something like
> > the Systems Reply in the rest of your post, but against a straw man.
> > Searle never claims that since 'understanding doesn't bleed through,'
> > Strong AI is false. He claims (in the original article; I haven't
> > read everything on this subject) that no additional understanding is
> > created anywhere, in the room or in the man, and so Strong AI is
> > false. That is, the fact that 'understanding doesn't bleed through'
> > is only a piece of the puzzle.
> >
> > Daniel
> >
> >
>
>
>
>
> --
> I swear upon the altar of God, eternal hostility to every form of
> tyranny over the mind of man. - Thomas Jefferson