Re: 3 "Real" Conscious Machines [WAS Re: Singularity: A rock 'em, shock'em ending soon?]

From: Woody Long (ironanchorpress@earthlink.net)
Date: Tue Jan 17 2006 - 23:56:55 MST


> [Original Message]
> From: Ben Heaton <factitious@gmail.com>
> To: <sl4@sl4.org>

>
> On 1/17/06, Woody Long <ironanchorpress@earthlink.net> wrote:
> > Precisely. And that is why I have proposed the Searle Chinese Room Test
> > for machine consciousness. The CPU must "understand" the incoming Chinese
> > it is translating via English program, and the resulting Chinese, in the
> > same way humans do, where this is taken to mean "as it is received by
> > human level consciousness." Then (and only then) can it be called for
> > all intents and purposes a conscious machine.
>
> So you're saying that for a machine to be conscious, it must
> understand the input it receives. Do you have an idea for a test that
> can be used to determine whether a particular machine meets that
> requirement?
>
> -Ben Heaton

Yes, for a machine to be conscious, it must (at least) understand
(receive/process) language inputs in the same way as a human-level
consciousness understands (receives/processes) these language inputs.

There IS an implied test in the Searle Chinese Room task. The Searle
machine consciousness test is in the form -

1. The man in the room understands the incoming language in a particular
way, i.e., receives/processes it as a human-level consciousness does.
2. The system in the room has been evaluated to be receiving/processing it
in this exact same way as a human-level consciousness does.
-------------
3. The system has passed the Searle test - it is exhibiting some machine
consciousness.

OR

1. The man in the room understands the incoming language in a particular
way, i.e., receives/processes it as a human-level consciousness does.
2. The system in the room has been evaluated not to be understanding
(receiving/processing) it in the same way as human consciousness does, but
to be just heuristically shuffling data cards without any understanding of
what it is shuffling.
-------------
3. The system has failed the Searle test - it is not exhibiting machine
consciousness.
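The two syllogisms above reduce to a single branch on the outcome of the
step-2 evaluation. A minimal sketch, assuming a hypothetical boolean result
`processes_like_human` standing in for that evaluation (the hard part,
which the proposal leaves to a theory of consciousness):

```python
def searle_test(processes_like_human: bool) -> str:
    """Verdict of the implied Searle test, given the step-2 evaluation.

    `processes_like_human` is a hypothetical stand-in for "the system has
    been evaluated to receive/process language as a human-level
    consciousness does" -- this sketch does not say how to evaluate that.
    """
    if processes_like_human:
        # First form: same processing as human-level consciousness.
        return "pass: exhibiting some machine consciousness"
    # Second form: "just heuristically shuffling data cards".
    return "fail: not exhibiting machine consciousness"

print(searle_test(True))
print(searle_test(False))
```

The sketch only makes the test's logical form explicit; everything
substantive lives in how step 2 is carried out.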

So the key to the test is the theory of human-level consciousness provided
in step (1), which is to be applied when a system is evaluated. One such
potential, applicable partial theory is the PASCAL Challenge's implied
theory that human-level consciousness receives/processes language in such a
way as to perform textual entailment recognition. Therefore, if a system is
evaluated to be doing the same thing, in the same way, it can be said to
be exhibiting at least SOME of the attributes of machine consciousness. Is
this not a fair statement?
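For concreteness, the textual entailment recognition task asks a system,
given a text T and a hypothesis H, to decide whether T entails H. The toy
word-overlap heuristic below (not the PASCAL method, and with an invented
threshold) is exactly the kind of "data-card shuffling" baseline that a
Searle-style evaluation would have to distinguish from genuine
understanding:

```python
def entails(text: str, hypothesis: str, threshold: float = 0.8) -> bool:
    """Crude baseline: call H 'entailed' if most of its words appear in T.

    This is a deliberately shallow heuristic -- it matches surface tokens
    and understands nothing about what it is matching.
    """
    t_words = set(text.lower().split())
    h_words = set(hypothesis.lower().split())
    overlap = len(h_words & t_words) / len(h_words)
    return overlap >= threshold

# Hypothetical example pair (invented for illustration):
print(entails("John bought a red car yesterday", "John bought a car"))  # True
print(entails("John bought a red car yesterday", "Mary sold a bike"))   # False
```

That such a baseline gets easy pairs right while plainly not understanding
them is the point of the distinction drawn in the test above.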

So this is how the Searle Test works, as proposed in my unpublished Company
Manifesto, "The Next Computer Age Will Be The Age Of The Droids." Here also
will be found my own theory of consciousness, but this - being at the
center of my IP - must remain private until the public demonstration.

Ken Woody Long
http://www.artificial-lifeforms-lab.blogspot.com/



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT