From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Jan 29 2008 - 13:36:24 MST
Consciousness
A general rebuttal to most of these objections is that consciousness
isn't strictly necessary for AI.
* Computation isn't sufficient, by itself, to produce consciousness.
o Rebuttal synopsis: Every atom in the human brain obeys the
laws of physics. Those laws are well understood, and can be modeled
on any Turing-complete computer with enough RAM and processing power.
With sufficient resolution, you could simulate the entire brain this
way, atom by atom. By construction, the simulated atoms would behave
identically to the real ones; the simulated person should, therefore,
also behave identically to the real person (allowing for quantum
randomness). Hence, as long as no supernatural or spiritual elements
are involved, it *must* be possible to build a conscious entity
inside a computer (a toy sketch of the idea follows below).
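A minimal Python sketch of that idea: physical dynamics are just
repeated state-to-state updates, which any ordinary computer can
execute. This is a toy one-dimensional system with a made-up
restoring force, not a brain model; all numbers and names here are
illustrative only.

    # Toy sketch: stepping classical particle dynamics on an ordinary
    # computer. An atom-level brain simulation would rest on the same
    # principle (deterministic state-to-state updates), just with
    # vastly more particles and a far better force model.
    import random

    def step(positions, velocities, forces, dt, mass=1.0):
        """One simple Euler-style update: new state from old state."""
        new_positions = [p + v * dt
                         for p, v in zip(positions, velocities)]
        new_velocities = [v + (f / mass) * dt
                          for v, f in zip(velocities, forces)]
        return new_positions, new_velocities

    # A handful of particles with random initial conditions.
    pos = [random.uniform(-1, 1) for _ in range(5)]
    vel = [random.uniform(-1, 1) for _ in range(5)]
    for _ in range(1000):
        frc = [-p for p in pos]  # toy harmonic restoring force
        pos, vel = step(pos, vel, frc, dt=0.001)
    print(pos)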
* A computer can never really understand the world the way humans
can. (Searle's Chinese Room)
o Rebuttal synopsis: This idea is mainly the result of
previous, abandoned AI projects, where (say) a cow was represented by
a single string variable, "COW". Obviously, using the word "COW" isn't
going to make the computer understand the full range of experiences we
associate with real-life cows. However, this problem is specific to
old-fashioned AI systems, *not* AIs or computers in general.
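The contrast can be made concrete in a few lines of Python: a bare
symbol versus a representation grounded in observed features. All the
field names and feature values here are hypothetical, chosen only to
illustrate the difference.

    # Old-fashioned "GOFAI" style: the concept is an opaque token.
    cow_symbol = "COW"

    # A (still crude) grounded style: the concept is tied to sensory
    # statistics the system has actually observed.
    cow_grounded = {
        "label": "cow",
        "visual_features": [0.91, 0.12, 0.55],  # e.g. an embedding
        "typical_sounds": ["moo"],
        "observed_contexts": ["field", "farm"],
    }

    def similarity(a, b):
        """Cosine similarity between two feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = sum(x * x for x in a) ** 0.5
        norm_b = sum(x * x for x in b) ** 0.5
        return dot / (norm_a * norm_b)

    # The grounded version supports graded comparisons that the bare
    # token cannot, e.g. "how cow-like is this new animal?":
    ox_features = [0.88, 0.15, 0.50]
    print(similarity(cow_grounded["visual_features"], ox_features))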
* Human consciousness requires quantum computing, and so no
conventional computer could match the human brain.
o Rebuttal synopsis: Human neurons are fairly well
understood, and so far, there's no evidence of any kind of quantum
computation within the brain. A quantum computer must be kept
isolated from outside disturbances to avoid decoherence (the
destruction of its quantum state), whereas every atom in the brain is
constantly bombarded by photons and other atoms; see the rough
numbers below.
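A back-of-the-envelope comparison makes the point. The figures below
are order-of-magnitude estimates taken from Max Tegmark's 2000 paper
on decoherence in the brain; treat them as illustrative.

    # Estimated decoherence time for quantum states in the warm, wet
    # brain vs. the timescale on which neurons actually compute
    # (order-of-magnitude figures after Tegmark 2000).
    decoherence_time = 1e-13   # seconds (upper end of the estimates)
    neural_firing_time = 1e-3  # seconds (action-potential timescale)

    ratio = neural_firing_time / decoherence_time
    print(f"Quantum states decohere ~{ratio:.0e} times faster "
          f"than neurons fire.")
    # => roughly 1e10: any quantum superposition is destroyed long
    # before it could influence neural-level computation.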
* Human consciousness requires holonomic properties.
o Rebuttal synopsis: This is still a fringe hypothesis, and
goes against the bulk of currently accepted neuroscience.
* A brain isn't enough for an intelligent mind - you also need a
body/emotions/society.
o Rebuttal synopsis: Humans need a body, emotions, and
society to function, but there's no real reason an AI would need them.
AIs and humans are vastly different from each other, and what applies
to one doesn't automatically translate to the other.
o Secondary rebuttal: Even if an AI did need a body, there are
plenty of simulated environments in which something like a body could
be provided (a minimal sketch follows below). AIs will automatically
have a society of sorts, by virtue of interacting with the
researchers building them.
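As a minimal sketch of what "a body in a simulated environment" could
mean: an agent with one sensor and one actuator in a toy
one-dimensional world. The world and all the names are invented for
illustration.

    # The environment, not the hardware, supplies the embodiment: the
    # agent senses and acts entirely inside the simulation.
    class World:
        def __init__(self, size=10, goal=7):
            self.size, self.goal, self.agent_pos = size, goal, 0

        def sense(self):
            """Sensor: signed distance from the agent to the goal."""
            return self.goal - self.agent_pos

        def act(self, move):
            """Actuator: move -1, 0, or +1, clamped to the world."""
            self.agent_pos = max(0, min(self.size - 1,
                                        self.agent_pos + move))

    world = World()
    for _ in range(20):  # perception-action loop
        delta = world.sense()
        if delta == 0:
            break
        world.act(1 if delta > 0 else -1)
    print("reached goal:", world.agent_pos == world.goal)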
* As a purely subjective experience, consciousness cannot be
studied in a reductionist/outside way, nor can its presence be
verified.
o Rebuttal synopsis: As with the interpretations of quantum
mechanics, we don't need to settle unverifiable philosophical
questions in order to make progress. What we *can* verify is that
intelligence has had a huge impact on the world, and any increase in
the level of available intelligence will have huge consequences for
the human species.
* A computer, even if it could think, wouldn't have human
intuition and so would be much less capable in many situations.
o Rebuttal synopsis: Human intuition, although it may seem
mysterious to us, is based on a physical network of subconscious
memories and observations. This network has been studied by cognitive
scientists, and there's no reason why it couldn't be programmed in if
necessary (a toy model follows below).
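As a toy illustration that "fast judgment from stored experience" is
programmable, here intuition is modeled as nearest-neighbor recall
over past cases. This is not a claim about how the brain does it; the
memories and features are invented for the example.

    # Toy model of intuition: given a new situation, recall the most
    # similar past case and reuse its outcome (1-nearest-neighbor).
    memories = [
        # (situation features, outcome)
        ([0.9, 0.1, 0.3], "safe"),
        ([0.2, 0.8, 0.7], "danger"),
        ([0.5, 0.5, 0.1], "safe"),
    ]

    def intuit(situation):
        """Return the outcome of the nearest stored memory."""
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        best = min(memories, key=lambda m: distance(m[0], situation))
        return best[1]

    print(intuit([0.3, 0.7, 0.6]))  # => "danger"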
* If we do not properly understand feelings and qualia, we could
accidentally cause our AI systems to suffer immensely while they are
being developed.
o This is an engineering problem. It *is* true that we could
cause immense suffering if we screw up. - Tom
- Tom