From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Jan 29 2008 - 13:37:30 MST
Implementation/(semi)technical
* We are nowhere near building an AI.
o Rebuttal synopsis: Most technology development takes place
behind the scenes, in faraway government and corporate labs where
most people don't see it. ARPAnet, the predecessor of the Internet,
had its first node online in 1969. In the quarter-century between
that first node and Netscape's 1995 IPO, the network grew
exponentially, but few if any seemed to notice. Only fifteen years
ago, HTML was in its infancy, and most people didn't own a computer.
Because we only notice the end stages of a project, when the
technology is popularized, it seems to come out of nowhere.
o True, the field of "Artificial Intelligence" has made only
moderate progress in the past thirty years. But all the building
blocks of human intelligence theory have been quietly falling into
place. Cognitive science has made huge strides. Bayesian information
theory has been popularized. Evolutionary psychology has continued to
make progress. All of these fields are vital to true Artificial
Intelligence, but their progress has gone mostly unnoticed, except by
academic specialists.
* Computers can only do what they're programmed to do. (Heading
6.6 in Turing's classic paper)
o Rebuttal synopsis: Even simple programs can produce
surprising, apparently inexplicable results. We still don't
understand the behavior of a number of five-state Turing machines
(link); a toy simulator illustrating the same point is sketched below.
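Here is a minimal Turing machine simulator in Python, loaded with what
is commonly cited as the 4-state, 2-symbol "busy beaver" champion (the
step and ones counts in the comments are the standard figures for that
machine); the unresolved question above concerns the analogous 5-state
machines:

    # Minimal Turing machine simulator (sketch).
    # The table below is the commonly cited 4-state, 2-symbol busy beaver
    # champion; if the table is copied right, it halts after 107 steps
    # leaving 13 ones on the tape. No one has proven the 5-state maximum.
    # (state, symbol read) -> (symbol to write, head move, next state)
    TABLE = {
        ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
        ('B', 0): (1, -1, 'A'), ('B', 1): (0, -1, 'C'),
        ('C', 0): (1, +1, 'H'), ('C', 1): (1, -1, 'D'),
        ('D', 0): (1, +1, 'D'), ('D', 1): (0, +1, 'A'),
    }

    def run(table, max_steps=1_000_000):
        tape, head, state, steps = {}, 0, 'A', 0
        while state != 'H' and steps < max_steps:
            write, move, state = table[(state, tape.get(head, 0))]
            tape[head] = write
            head += move
            steps += 1
        return steps, sum(tape.values()), state == 'H'

    steps, ones, halted = run(TABLE)
    print(f"halted={halted}, steps={steps}, ones on tape={ones}")

The tape is a dictionary so it can grow in either direction, and the
step cap keeps the sketch from running forever if the table is copied
wrong.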
* The human brain is not digital but analog; therefore ordinary
computers cannot simulate it.
o Rebuttal synopsis: A digital computer can simulate an
analog system to an arbitrarily high level of accuracy. A single
kilobyte of data is enough to represent a quantity to over two
thousand decimal digits of precision (see the short calculation below).
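The figure is straightforward arithmetic: a kilobyte is 8,192 bits,
and each bit is worth log10(2), roughly 0.301 decimal digits.

    import math

    bits = 1024 * 8                 # one kilobyte of data
    digits = bits * math.log10(2)   # decimal digits expressible in that many bits
    print(round(digits))            # about 2466: over two thousand digits of precision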
* Gödel's Theorem shows that no computer, or mathematical system,
can match human reasoning.
o Rebuttal synopsis: Humans are also subject to Gödel's
Theorem. We can't prove the system's Gödel sentence G ("G cannot be
proven") either; we can only accept it by assuming the system is
consistent, an assumption a machine can make just as easily.
* It's impossible to make something more intelligent/complex than yourself.
o Rebuttal synopsis: Evolution's algorithm is extremely
simple, yet it has produced creatures of fantastic complexity,
ourselves included (a toy illustration of cumulative selection is
sketched below).
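A toy illustration, in the spirit of Dawkins's "weasel" program (the
target string, mutation rate, and population size are arbitrary
choices for the sketch): a few lines of copy-with-errors-and-keep-
the-best reliably find a 28-character target that blind random
guessing would essentially never hit.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"   # arbitrary target, after Dawkins
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def mutate(s, rate=0.05):
        # Copy the string, replacing each character with a random one at a small rate.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    def fitness(s):
        # Number of positions that already match the target.
        return sum(a == b for a, b in zip(s, TARGET))

    # The whole "algorithm": copy with errors, keep the best copy, repeat.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        candidates = [parent] + [mutate(parent) for _ in range(100)]
        parent = max(candidates, key=fitness)
        generation += 1
    print(f"reached the target in {generation} generations")

Keeping the parent in the candidate pool guarantees fitness never goes
backwards, so the loop typically converges in a few hundred
generations rather than wandering.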
* Creating an AI, even if it's possible in theory, is far too
complex for human programmers.
o Does anyone have counter-evidence? This looks like a real
possibility. - Tom
+ There's always the "brute-forcing AI via
evolutionary algorithms" argument, but that isn't really of any use
for designs that are supposed to be Friendly. More usefully, one
could mention that we can always develop simple software tools that
help us design more complex software tools, though that may not be
enough. A third response is that as our understanding of intelligence
and the human brain develops, we might also develop brain implants
and the like that expand our capability to deal with complexity. That
might be too science-fictional for lots of readers, but throwing in a
couple of links to the artificial hippocampus they've been working on
might help. Of course, with the regulatory nightmare medical
technology has to go through, really powerful brain implants will
take a long, long time to come onto the market...
* AI is impossible: you can't program it to be prepared for every
eventuality. (Heading 6.8 in Turing's classic paper; SIAI blog
comment: general intelligence impossible)
o Rebuttal synopsis: With good programming and enough
memory, an AI can handle an arbitrarily large number of circumstances,
just as humans do. Evolution has equipped us with specialized modules,
such as ones that let us effortlessly read the expressions of others,
and an AI can be designed with far more modules than we humans have.
* We still don't have the technological/scientific prerequisites
for building AGI; if we want to build it, we should develop these
instead of funding AGI directly.
o Rebuttal synopsis: Any necessary prerequisites can be
funded by the AGI project directly. We still don't know what these
prerequisites are, so at a minimum, the field still needs to be
investigated until we can determine where to go next.
* There's no way to know whether AGI theory works without actually
building an AGI. (link)
o Rebuttal synopsis: Several theory components, such as
recursive decision theory, can be tested on much less complex systems.
It is true, however, that testing will become more difficult as the
system grows more complex.
* Any true intelligence will require a biological substrate.
o Rebuttal synopsis: Biological substrates are made of
atoms, which can be simulated on any Turing-equivalent computer given
enough time and memory.
* We can't reach the levels of computing power needed to equal the
brain using currently existing hardware paradigms.
o Rebuttal synopsis: IBM's BlueGene already exceeds many
estimates of the human brain's computing power (a rough order-of-magnitude
comparison is sketched below). With nanotechnology, we should be able
to get 10^20 FLOPS on a desktop computer.
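For a rough sense of scale, a quick comparison (the brain figures are
roughly Moravec's and Kurzweil's estimates, the BlueGene figure is its
circa-2007 benchmark, and all of them are order-of-magnitude
approximations):

    # All figures below are rough, commonly quoted order-of-magnitude estimates.
    estimates = {
        "brain (Moravec-style estimate)":  1e14,  # ops/sec
        "BlueGene/L (circa 2007)":         5e14,  # FLOPS, roughly half a petaflop
        "brain (Kurzweil-style estimate)": 1e16,  # ops/sec
        "hypothetical nanotech desktop":   1e20,  # FLOPS, the figure quoted above
    }
    for name, ops in estimates.items():
        print(f"{name:32s} ~{ops:.0e} ops/sec")

On these figures BlueGene already passes the lower-end brain
estimates, though not the higher ones; the 10^20 figure would exceed
all of them by a wide margin.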
* Nobody seems to have much of a clue on how to solve the grounding problem.
o Rebuttal synopsis: The "symbol grounding problem" is an
illusion created by decades of misapplied AI work. See Artificial
Addition.
- Tom