Re: Fwd: We Can Understand Anything, But are Just a Bit Slow

From: Woody Long (ironanchorpress@earthlink.net)
Date: Fri Apr 28 2006 - 13:18:37 MDT


> [Original Message]
> From: Ben Goertzel <ben@goertzel.org>
> To: <sl4@sl4.org>
> Date: 4/28/2006 2:34:31 PM
> Subject: Re: Fwd: We Can Understand Anything, But are Just a Bit Slow
>
> To give just a hint of how these distinctions manifest themselves in a
> fleshed-out AGI design, in Novamente:

Dr. Goertzel,

As a fellow designer, I must say wow, you are well on your way. The more
posts of yours I read here about Novamente, the more impressed I am with
your development of machine intelligence.

Without reference to the analogy, perhaps you would be interested in a
phone conversation I had with Professor Searle yesterday. Here is the
heart of it -

WL: A syntactical machine can never be conscious. Correct?

PS: Yes.

WL: And semantical machines that have a semantic understanding of their I/O
are conscious. Correct?

PS: Yes. And that's the question: how do you get this semantic
understanding?

WL: Exactly! That's the key point. And the prototype of my invention is
doing just that.

PS: Then send it to me. (*In a stern tone that implied 'I will believe it
when I see it.'*)

Regardless of whether this is true, I am curious to ask you, though you
may not want to discuss such details, simply put: has Novamente crossed the
great divide between syntactical, simulatory Classical Machines and
Post-Classical strong AI conscious machines? In other words, does it have
a semantic understanding of its language inputs and outputs, or is it an
extremely clever, ingenious syntactical simulation of intelligence? Either
way, congratulations! It is solid evidence that machine intelligence is
close at hand ...
