From: Woody Long (ironanchorpress@earthlink.net)
Date: Wed Jan 18 2006 - 13:43:32 MST
> [Original Message]
> From: Daniel Radetsky <daniel@radray.us>
> To: <sl4@sl4.org>
>
> On Tue, 17 Jan 2006 20:11:22 -0800 (PST)
> Phil Goetz <philgoetz@yahoo.com> wrote:
>
> > My more important point is that Woody's test is untestable. We have no
> > way to evaluate whether a machine is conscious of the meaning of its
> > inputs in the same way that a human is.
-------------------
> Are you sure? Perhaps when we have a more complete understanding of the
> way that humans are conscious of the meaning of their inputs, we will
> realize that we can determine whether a given machine is similarly
> conscious. It sounds like you're just making an argument from lack of
> imagination. If you disagree, tell me why I should believe such a test
> is impossible, rather than nonexistent.
>
> Daniel
-------------------
Bingo. This "Searle Test" is the only one for me. Show me how human-level
consciousness understands (receives/processes) the incoming language, then
show me your system doing the same thing in the same way. Then, if I accept
your consciousness theory premise, I will conclude that your system is
exhibiting at least some attributes of machine consciousness. Otherwise, I
will not.

The interesting thing is that I see the PASCAL textual entailment
recognition challenge Ben Goertzel mentioned as a perfect, albeit partial,
implementation of the Searle Test. There is a (partial) theory of
consciousness - human-level consciousness is able to understand
(receive/process) incoming language so as to be able to perform textual
entailment recognition. If a system can be evaluated and seen to be doing
the same thing in the same way, then it has passed the Searle Test in this
one regard, and can be said to be exhibiting at least some attributes of
machine consciousness. (Clearly, it IS starting to understand at least some
of what the humans are saying around it.)

I will not accept a Turing Test "passer" as conscious merely because it
passed the Turing Test. Passing can in no way prove to me that the system
understands what it is saying the way human-level consciousness
understands. It could easily be a card-shuffling, unaware, Searle-failing,
classic-period (1950-2005) digital computer system, without any
consciousness at all. So this is the test that I will use to evaluate all
MC (machine consciousness) Contenders. And since Goertzel has asked me to
respond to this very Searle Test-equivalent PASCAL Challenge (which I will
do ASAP, as it is the key to advancing this debate), I must conclude that
he ALSO sees the value in this type of evaluation. I think we could even
come to a general agreement on the type of tests we would like to
administer to MC Contenders to verify their claims of machine
consciousness, namely Searle Test variants of the PASCAL Challenge form, in
ever-ascending degrees of difficulty.
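
For concreteness, here is a toy Python sketch of the *shape* of that
evaluation - nothing from the PASCAL organizers or from Goertzel, just my
own illustration. It reads (text, hypothesis) pairs and guesses entailment
by crude word overlap; the overlap rule and the 0.75 threshold are
assumptions made up for the sketch. Of course, this is exactly the kind of
card-shuffling shortcut that would NOT pass the Searle Test - the point is
only to show the input/output form an MC Contender would be evaluated on:

    # Toy illustration of the RTE task *format* only: each example is a
    # (text, hypothesis) pair, and the system must judge whether the text
    # entails the hypothesis. The word-overlap rule and the 0.75 threshold
    # are arbitrary assumptions for this sketch, not part of the challenge.
    import string

    def words(s):
        # Lowercase, strip punctuation, split into a set of tokens.
        table = str.maketrans("", "", string.punctuation)
        return set(s.lower().translate(table).split())

    def entails(text, hypothesis, threshold=0.75):
        # Guess entailment from how much of the hypothesis's vocabulary
        # already appears in the text.
        hyp = words(hypothesis)
        overlap = len(hyp & words(text)) / len(hyp)
        return overlap >= threshold

    # Two made-up (text, hypothesis, gold-label) examples in the RTE shape.
    pairs = [
        ("The cat sat on the mat.", "The cat sat.", True),
        ("The cat sat on the mat.", "The dog barked.", False),
    ]
    for text, hyp, gold in pairs:
        print(entails(text, hyp), "expected:", gold)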
Ken Woody Long
http://www.artificial-lifeforms-lab.blogspot.com/