From: Elias Sinderson (firstname.lastname@example.org)
Date: Fri May 28 2004 - 13:00:40 MDT
John Clark wrote:
>[...] We just have to take it as an axiom that if something is intelligent then it's conscious.
Works for me, and I'm really not interested in going down the 'what if
it's such a good simulation that we can't tell the difference' rat hole.
So, if we admit the above as an axiom, how does it jibe with our other
observations and theorems regarding the conscious experience and all
that comes with it?
My own thinking on the subject actually admits of a spectrum of both
intelligence and consciousness, in a roughly ordered continuum from the
most primitive information processing life forms to the most complex.
Thus, questions regarding whether a dog, for example, has consciousness
can be quickly dispatched with the answer 'yes, but not to the same
degree as humans'. In addition to the degree of consciousness, this
perspective allows for different kinds of intelligence (as information
processing) and, hence, consciousness as well. From the above
assumptions, it is reasonable to conclude that an animal which sees
different wavelengths of light than humans do, thereby processing
information about the physical world that humans do not have direct
access to with our natural senses, has a different form of consciousness
than humans do - less in some regards, but more in others.
Ben Goertzel replied:
>I'm not sure. That's a good rule of thumb for humans right now, but in [the] future there may be highly "intelligent" systems (in some senses of the word "intelligence") that both claim and appear not to be conscious. I guess that the current notion of "intelligence" is too coarse to deal with the variety of "intelligent" systems that are going to appear in the future.
Okay, now consider that in terms of what I wrote above - I think you'll
find that it meshes nicely. :-)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT