From: Mike Dougherty (firstname.lastname@example.org)
Date: Wed Jan 18 2006 - 16:50:34 MST
On 1/17/06, Phil Goetz <email@example.com> wrote:
> Searle has elaborated repeatedly and extensively on the Chinese room
> argument in the 25 or so years since he made it.
> One of the things he says is that
> we need "brain stuff" to produce consciousness, and that the lack of
> consciousness in the computer is because it lacks a physical substrate
> with specific, but currently unknown, properties (much like Penrose's).
> Or, in other words, a soul.
Are you saying that the unknown parts (purpose and function) of the human
brain are what you arbitrarily label the "soul"? If so, then the part you
are reserving for humanity is precisely the unknown, and you are further
claiming that it is inherently unknowable (in loose analogy with Gödel's
incompleteness theorems).
You object to granting a manufactured 'thinking machine' any membership in
consciousness because you (we) would have a priori knowledge of every part
of the manufactured machine. Maybe it's a bit arrogant to believe in the
superiority of the group to which you belong solely (soul-ly?) on the basis
of what potential remains to be discovered about that group. Perhaps this
godlike state of omniscience over our creations is justified. (OK, let's not
debate the specific details of god - just go with the overall concept.)
I think the concern of those debating this topic on the pro
AI-are-people-too side is that the creation has the capacity to outpace its
creator. The drive to 'enhance' human wetware by integrating it with
technology (or by using better chemistry, or nanochem, or whatever) probably
comes from the intention to maintain the perception of superiority, even at
the cost of admitting that the technology is able to provide the
enhancement. Humans can be very irrational.
Along another line of thinking - If the "Penrose' quantum-mystery-stuff"
is essentially chaotic, then why can't it be modelled inside the machine?
If the Mandelbrot fractal can be expressed/encoded in a few dozen bytes, and
"rendered" within a given region to a specific depth, how long does the
resultant construct take to "understand" when applied to a universe where
each dimension has meaning? Supposing a belief in the concept of an "old
soul," I suggest that the meaning of a life is to gain an understanding of a
more subtle nuance than the life experienced by a "new soul". A
computational equivalent would be for a machine to achieve an 'enlightened'
understanding of a 3x3 grid - such as the arrangement and number of each
unit square. Upon reaching this state of awareness, a new observation is
made: each unit square is composed of a 3x3 grid of smaller squares. If
our thinking machine uses recursion to examine each fractal square, will it
crash with a stack overflow - or will it somehow break the recursion by
coming to understand the nature of recursion itself? I would be
interested/happy to see a thinking machine discover recursion without having
a priori programming to defend against it.
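To make the "few dozen bytes" point concrete, here is a minimal sketch of my own (not from the original post) of the Mandelbrot set's escape-time rendering: the generating rule z -> z*z + c is a single line of code, yet it can be "rendered" within any chosen region to any chosen depth.

```python
def escape_time(c, max_iter=50):
    """Iterate z -> z*z + c from z = 0; count how long |z| stays bounded."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped: c is outside the set
            return n
    return max_iter          # still bounded: treat c as inside the set

# "Render" a small region of the complex plane as ASCII art.
for row in range(11):
    line = ""
    for col in range(31):
        c = complex(-2 + col * 0.1, -1 + row * 0.2)
        line += "#" if escape_time(c) == 50 else "."
    print(line)
```

The whole generator really is just a handful of lines; the unbounded detail lives in how deeply you choose to iterate and zoom.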
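The stack-overflow question above can be sketched directly (an illustrative toy of mine, not anything from the thread): a naive recursive descent into nested 3x3 grids exhausts the stack, while a descent that "understands" its own recursion enough to carry a depth bound terminates cleanly. In Python the crash surfaces as a RecursionError rather than a literal stack overflow.

```python
import sys

sys.setrecursionlimit(200)    # keep the demonstration small

def examine(depth):
    """Descend into a sub-square with no defence: each square
    contains another 3x3 grid of smaller squares, forever."""
    return examine(depth + 1)

def examine_with_insight(depth, max_depth):
    """The same descent, but aware of its recursion: stop at max_depth."""
    if depth >= max_depth:
        return depth          # the pattern is recognized; break out
    return examine_with_insight(depth + 1, max_depth)

try:
    examine(0)
except RecursionError:
    print("naive descent: killed by a stack overflow")

print("insightful descent reached depth", examine_with_insight(0, 50))
```

Of course, the depth bound here is exactly the "a priori programming to defend against it" that I said would be less interesting than the machine discovering the limit for itself.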
> My more important point is that Woody's test is untestable. We have no
> way to evaluate whether a machine is conscious of the meaning of its
> inputs in the same way that a human is.
Honestly, we have no way to verify that humans all consciously evaluate
inputs in the same way, either. We tend naturally to aggregate ourselves
into groups of the "like-minded" based on expected behavioral outcomes, but
the way in which each of us arrives at those outcomes is subjective. I have
no proof that any idea posted to this list is any more "human" than a
sufficiently advanced AI would have me believe it to be. My point is that
it doesn't really matter - I interact with you (collectively) because I
enjoy the subjective experience of thinking about the ideas presented.
(Well, most of them.)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT