Re: 3 "Real" Conscious Machines [WAS Re: Singularity: A rock 'em, shock'em ending soon?]

From: Woody Long (ironanchorpress@earthlink.net)
Date: Tue Jan 17 2006 - 21:12:34 MST


> [Original Message]
> From: Phil Goetz <philgoetz@yahoo.com>
> To: <sl4@sl4.org>
>
> This is not a variant of the Turing Test. The point of the
> Turing Test was NOT to define a way to know if a computer
> was intelligent. The point was that properties such as
> intelligence are defined, observed, and ACKNOWLEDGED
> -- OBSERVATIONALLY. The point was that there is no way
> to know what "consciousness" should look like, or what
> sort of circuit implements it, and so the best one can do
> is say that if it acts like a person, it's a person.

Agreed: "If it acts like a person then it's a person." However, things are
not as black and white as you make it seem.

Yet recently it is the Turing Test itself that has come under fire as a
meaningless non-test for a conscious machine. A good analysis of both
Turing and Searle can be found at http://www.consciousentities.com/. In
their article "The Loebner Prize" (Oct 2005) they say -

"I believe serious AI researchers have, on the whole, tended to stay away
from the Loebner (it seems that in 1995 Marvin Minsky offered a "prize" of
$100 to anyone who could make Hugh Loebner desist from holding the
contest), but it has also had support from serious intellectuals. Ned Block
appears to have been one of this year's judges; Daniel Dennett chaired the
panel during some of the early years (but eventually withdrew when he could
not get agreement to his plans, which would have seen a number of more
'serious' AI challenges introduced as preliminaries to the main event).
It's certainly an entertaining event - sometimes the transcripts of the
conversations have a demented but irresistible inadvertent humour about
them - but I wonder how Alan Turing would have felt about it. Nowadays the
contest rather underlines the failure of Turing's prediction that we should
have conversational computers by the end of the twentieth century.
Personally, I think the other two points which come across most strongly
from the event are the continuing weakness of the chatbots and the
unserviceable qualities of the Turing test itself."

And later in the same article:

"... that's the whole problem with the Turing test principle. If you find a
group of people who want to believe the computer is talking sensibly, and
they make enough allowances for it, you can easily get a positive result. A
program which just bats back people's own input in the form of questions,
like Joseph Weizenbaum's famous Eliza, is quite capable of fooling some
people. On the other hand, if you have a skilled forensic examination, it's
always going to be possible to find inconsistencies in the conversation of
any human-like entity which hasn't actually lived a genuine human life. So
what's the point?"
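
To make the point concrete, here is a toy Eliza-style reflector in Python -
my own sketch, not Weizenbaum's actual program, and the pattern and pronoun
table are invented purely for illustration:

import re

# A toy Eliza-style responder: it "bats back" the user's own input as a
# question by swapping pronouns, with no understanding of the content.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you",
               "you": "I", "your": "my"}

def reflect(text):
    # Swap first/second person words; everything else passes through.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(user_input):
    m = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if m:
        return "Why do you feel %s?" % reflect(m.group(1))
    return "Why do you say %s?" % reflect(user_input.rstrip(".?!"))

print(respond("I feel nobody understands my work"))
# -> Why do you feel nobody understands your work?

All it does is hand the input back as a question, yet exchanges like this
were enough to fool some people.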

And further, from http://www.consciousentities.com/stories.htm#turing we
find:

"[Turing] thought that by the end of the twentieth century a computer would
be able to fool an average respondent during several minutes of apparently
ordinary conversation. The really controversial claim, however, was that
this kind of test could establish that a computer was, or at least deserved
to be treated as, conscious.

The weak form of this claim (that if something seems to be conscious we
might as well treat it as if it were for the time being) is hard to argue
with, but not particularly interesting. Against the stronger form (that
things which pass the test really are conscious), it can be argued that
what makes someone conscious is not their external behaviour, or
specifically their ability to hold an intelligent conversation, but what
goes on inside their heads.

Do their responses spring from a **real understanding of the
conversation?** In response, supporters of the test might ask how we know
anyone is conscious other than by deductions based on the intelligence of
their behaviour (conversational behaviour being an especially demanding
variety).

The real problem with the Turing test is that it doesn't work. Suppose we
got incoherent gibberish through the teleprinter, or the words 'What are
you talking about?', or a string of Xs, or nothing, every time. Would that
prove that there wasn't a stupid or angry human being on the other end? Or
suppose we get perfect, sophisticated answers to our questions. Does that
prove they aren't a set of pre-recorded answers being selected by a cunning
but witless algorithm, or by a long run of good luck? No, and no. For a
test of this kind to work, there would have to be a question which human
beings invariably answered one way and computers invariably answered
another. Clearly there is no such question. Really the whole thing is a
misapplication of Leibniz's Law."
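
The "cunning but witless algorithm" half of this argument is easy to
demonstrate. A minimal sketch in Python, with canned question/answer pairs
invented purely for illustration -

CANNED_ANSWERS = {
    "what is the meaning of life?": "That is something each of us must decide.",
    "do you enjoy poetry?": "Only when it surprises me; most of it does not.",
}

def canned_reply(question):
    # Selects a pre-recorded answer keyed on the literal question text;
    # nothing whatsoever is understood.
    return CANNED_ANSWERS.get(question.strip().lower(),
                              "What are you talking about?")

print(canned_reply("Do you enjoy poetry?"))
# -> Only when it surprises me; most of it does not.

Given a large enough table and lucky questions, its answers look perfect;
given anything else, it falls back to exactly the unhelpful reply the
passage mentions. Behaviour alone cannot distinguish the two cases.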

>
> What Searle was proposing - and what Woody is proposing -
> is exactly the sort of meaningless non-test that Turing
> was objecting to, where one looks at a system and tries
> to determine, by intuition about its operation, whether
> it is conscious. This is futile. Woody has not proposed
> any test that can be carried out by a human.
>

Actually, Searle implied such a test in his Chinese Room story. A clear
description of Searle's argument can be found at
http://www.consciousentities.com/stories.htm#chineseroom

Here is a snippet -

Searle has no problem with the idea that some machine **other than a
[classic] digital computer [let's call it a post-classical droid machine]
might one day be conscious**: he accepts that the brain is a machine,
anyway. The practicalities of diagnosing consciousness are not the issue;
the point is what it is you are trying to diagnose. Of course Searle is not
impressed by the mere combination of arguments he has rejected
individually. Simulating a brain is no good; a simulation of rain doesn't
make you wet: you could simulate synapses with a system of water pipes
which the man in the room controls: just as obviously as in the original
example, he still doesn't understand the stories he is asked about. Using
the outputs to control a robot rather than answer questions makes no
difference and adds no understanding. It seems highly implausible to
attribute understanding to an arbitrary 'system' made up of the conjunction
of the man and some rules. If necessary, the man can memorise the rules:
then the whole 'system' is in his memory, but he still doesn't understand
the Chinese. [So these systems must be ruled out as conscious machines]

So Searle is on to something here. If the CPU system CAN understand the
incoming language as a human does, or in more precise words, if it
"receives/processes the incoming language in exactly the same way as human
level consciousness receives/processes it," then it CAN be said to be a
conscious machine. To EVALUATE a system, therefore, we first need to know
how human level consciousness receives/processes incoming language. Then we
can check whether the system is doing the same thing. This is the implied
Searle Chinese Room Test in its essence.
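
In outline, the test might be sketched like this - with the strong caveat
that the processing stages below are hypothetical placeholders I have
invented, since nobody yet knows how to enumerate the real stages of human
level language processing:

# Hypothetical reference model of how human level consciousness
# receives/processes incoming language. The stage names are invented
# placeholders, not established science.
HUMAN_PROCESSING_STAGES = [
    "perceive_symbols",    # receive the raw input
    "parse_syntax",        # recover grammatical structure
    "ground_semantics",    # attach real meanings, not just symbol shapes
    "integrate_context",   # relate the input to prior knowledge
]

def implied_searle_test(system_stages):
    # Pass only if the system processes language the same way the human
    # level reference does, regardless of how good its outputs look.
    return system_stages == HUMAN_PROCESSING_STAGES

# A Chinese Room style rule-follower only shuffles symbols, so it fails:
chinese_room = ["perceive_symbols", "match_rules", "emit_symbols"]
print(implied_searle_test(chinese_room))  # -> False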

We can see an application of it in the Pascal Challenge that Ben Goertzel
asked me to respond to (and I will ASAP). The idea of this challenge is
that **human level consciousness can receive/process incoming language and
understand it so as to be able to perform textual entailment recognition.**
Whether this is a basic essential attribute of human level consciousness or
a developed skill is open to question. However, the Pascal Challenge does
show us - in general anyway - one way human level consciousness
receives/processes incoming language, and so it is fair to ask builders of
machine consciousness whether their conscious machine can understand the
incoming language in the same way as human level consciousness does. And as
the Pascal Challenge people know, this version of the Searle Test can be
EVALUATED.
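
For a concrete picture of what the Challenge evaluates, here is a
deliberately crude word-overlap baseline for textual entailment recognition
- my own sketch with an invented threshold, nothing like the sophistication
of real Pascal Challenge entries:

import string

def words(s):
    # Lowercase, punctuation-stripped word set.
    return {w.strip(string.punctuation).lower() for w in s.split()}

def entails(text, hypothesis, threshold=0.8):
    # Guess "entailed" when most of the hypothesis's words already appear
    # in the text. Real RTE systems model syntax and meaning.
    h = words(hypothesis)
    return len(h & words(text)) / len(h) >= threshold

# Example pair in the style of an RTE item (invented for illustration):
text = "Turing predicted conversational computers by the year 2000."
hypothesis = "Turing predicted conversational computers."
print(entails(text, hypothesis))  # -> True

The point is that a system's entailment judgments can be scored against
human judgments pair by pair, which is what makes the evaluation possible.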

 - Ken Woody Long
http://www.artificial-lifeforms-lab.blogspot.com/


