From: Thomas McCabe (pphysics141@gmail.com)
Date: Tue Jan 29 2008 - 13:35:28 MST
Technological progress and exponential growth
Extrapolation of graphs doesn't prove anything. It doesn't show that
we'll have AI in the future.
It is true that there is no conclusive evidence that AI will be
developed in the near future. However, increasing processing power,
combined with improved brain-scanning methods, seems likely to produce
artificial intelligence before long. Molecular nanotechnology, in
particular, would enable massive amounts of processing power, as well
as a thorough mapping of the brain. Even if it never becomes
available, more conventional techniques are also making fast progress:
by some estimates, the top supercomputers of today already have enough
processing power to match the human brain, and machines of comparable
potential are expected to become cheaply and commonly available within
a few decades. Projects to build brain simulations are currently
underway, with one team having run a second's worth of a simulation as
complex as half a mouse brain, and IBM's Blue Brain project seeking to
simulate the whole human brain.
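As a rough sanity check on the claim that top supercomputers already match the brain, here is the usual back-of-the-envelope arithmetic. All of the figures below are commonly cited order-of-magnitude estimates, not established facts, and published estimates of the brain's processing rate span several orders of magnitude:

```python
# Rough back-of-the-envelope comparison. Every figure here is an
# order-of-magnitude estimate, not an established fact.
NEURONS = 1e11              # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron
SIGNALS_PER_SECOND = 100    # rough upper bound on per-synapse signaling rate

# High-end estimate: every synapse active at full rate.
brain_ops_high = NEURONS * SYNAPSES_PER_NEURON * SIGNALS_PER_SECOND  # 1e17
# Low-end estimates (e.g. Moravec-style functional estimates) come in
# around 1e14 operations per second.
brain_ops_low = 1e14

petaflop_machine = 1e15  # ops/sec, roughly a top supercomputer circa 2008

print(brain_ops_low / petaflop_machine)   # 0.1  -> already surpassed
print(brain_ops_high / petaflop_machine)  # 100.0 -> a few more doublings away
```

Depending on which estimate one takes, today's fastest machines are either already past the brain or a couple of decades of hardware progress short of it, which is what makes the "within a few decades" claim plausible either way.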
Progress made towards reverse-engineering the brain will also help AI
research by making researchers themselves more intelligent: for
instance, IQ tests seem to measure working memory capacity (Oberauer
et al., 2005). As the neural basis for different working memory
capacities becomes clear, it may become possible for us to increase
our own intelligence directly. Even if this proves impossible,
algorithms extracted from the brain can be applied to traditional
computer systems, making them more effective at helping us conduct
research.
Even if we exclude the possibility of artificial intelligence by brain
reverse-engineering, increasing amounts of processing power are likely
to make it easier to create AIs by evolutionary programming. The
human mind was never designed by anyone; it evolved through genetic
drift and selection pressures. It may not be strictly necessary for
us to understand how a mind works, as long as we can build a system
with enough computing power to simulate evolution and produce an
artificial mind optimized for the conditions we want it to perform in.
Combining this with advances in cognitive science and traditional
artificial intelligence techniques, there is a very real possibility
that AI will be developed within the coming decades.
While nothing is ever certain, these considerations are weighty
enough to make the issue worth our attention.
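The evolutionary-programming idea above can be illustrated with a toy genetic algorithm. This is only a minimal sketch: the bitstring target, population size, and mutation rate are arbitrary illustrative choices, and evolving anything mind-like would of course be incomparably harder than matching a 20-bit pattern:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

TARGET = [1] * 20  # arbitrary 20-bit target genome, for illustration only

def fitness(genome):
    # Count how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=100, mutation_rate=0.02):
    # Start from a random population of bitstrings.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break  # perfect match found
        survivors = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))  # single-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with small probability (mutation).
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # converges to (or very near) the maximum of 20
```

The point of the sketch is that nothing in it "understands" the target: selection pressure plus variation is enough to find it, given sufficient computing power.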
Kurzweil's graphs for predicting AI are unrealistic.
The case for believing that AI may be near does not depend on Ray
Kurzweil's predictions. For the actual reasons, see "Extrapolation of
graphs doesn't prove anything".
* AI is just something out of a sci-fi movie, it has never actually existed.
o Rebuttal synopsis: rockets flying to the moon were just
something out of sci-fi books up to 1969.
* Big changes always seem to be predicted to happen during the
lifetimes of the people predicting them.
o Rebuttal synopsis: Even if the Singularity takes thousands
of years, it's still a worthwhile goal for the human species, and we
need to pursue it.
* The Singularity is the Rapture of religious texts, just dressed
in different clothes to appeal to proclaimed atheists.
o Rebuttal synopsis: Unlike any of the various Raptures, the
Singularity is a technological event, caused by ordinary humans
following the ordinary laws of physics. It does not involve any
religious or divine powers, and it does not involve outside
intervention; it will only happen when we go out and make it happen.
* Moore's Law is slowing down.
o Rebuttal synopsis: The original Moore's Law, for number of
transistors on a chip, has continued into 2007 and 2008 (see Intel's
website), and engineers expect it to continue for at least another
decade.
* Progress on much simpler AI systems (chess programs,
self-driving cars) has been notoriously slow in the past.
o Rebuttal synopsis: Most of the successful AI software is
not called "AI software", and is used by corporations or programmers
instead of individual consumers. An industry-wide survey would be
required to see how much progress has actually been made in narrow AI
overall; to my knowledge, no such survey has ever been done.
* There could be a war/resource exhaustion/other crisis putting
off the Singularity for a long time. (See Tim O'Reilly's first comment
in the comments section)
o Rebuttal synopsis: With very few exceptions, the past few
centuries have seen exponential technological growth, and a
corresponding increase in the general standard of living. It would
require a *huge* disruption to halt this progress; even WWII, the
single most catastrophic event in modern human history, didn't slow
the march of technology.
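To make the Moore's Law point above concrete, here is the doubling trend as a toy calculation. The two-year doubling period and the starting transistor count are illustrative assumptions; Moore's Law is an empirical trend, not a law of nature:

```python
# Moore's Law as a simple doubling model. The two-year doubling period
# is an approximation of the historical trend.
def projected_transistors(start_count, years, doubling_period=2.0):
    return start_count * 2 ** (years / doubling_period)

# A hypothetical 2008 chip with 1 billion transistors, projected a
# decade ahead: five doublings, i.e. a 32x increase.
print(projected_transistors(1e9, 10))  # -> 32000000000.0
```

Five doublings in a decade is why even "at least another decade" of Moore's Law implies a very large jump in available processing power.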
References
    * Oberauer, K., Wilhelm, O., Schulze, R., & Süß, H.-M. (2005).
Working memory and intelligence - their correlation and their
relation: comment on Ackerman, Beier, and Boyle. Psychological
Bulletin, 131(1), 61-65.
http://eis.bris.ac.uk/~psxko/Oberauer.et-al.PsychBull.2005.pdf
- Tom
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:01 MDT