From: Rolf Nelson (rolf.h.d.nelson@gmail.com)
Date: Sun Sep 16 2007 - 14:38:14 MDT
Here's a list I wrote up of AI takeoff and non-takeoff scenarios,
including the recent botnets scenario. Any additions? The ones marked
"Weak scenario" I'm thinking of dropping, as I consider them much less
likely than the other scenarios.
-Rolf
Strong AI Takeoff Scenarios
Things that could cause the current Moore's Law curve to get blown away:
1. Enormous nanotechnology advances
2. Enormous biotech advances, allowing dramatic intelligence
augmentation to human brains
3. Enormous quantum computing advances. (Weak scenario)
Things that could cause a sudden, large (1000x) increase in hardware
devoted to AI self-improvement:
1. Self-improving AI, nowhere near *general* human-level intelligence
yet, is suddenly put in charge of a botnet and swallows most of the
Internet.
2. Self-improving AI, nowhere near human-level intelligence yet,
figures out how to beat the stock market, winning on the order of a
trillion dollars. Its owner reinvests much of the massive winnings in
more hardware. (Weak scenario)
3. Most of the skeptics become convinced; Strong AI (and, hopefully,
Friendliness) research dramatically increases, maybe even resulting in
a program on the scale of the Apollo Project.
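For scale, here's a rough sketch (my arithmetic, not from the post; it assumes the classic ~18-month Moore's Law doubling time) of how many years of ordinary hardware growth a sudden 1000x jump would compress:

```python
import math

DOUBLING_MONTHS = 18  # assumed Moore's Law doubling time (classic ballpark figure)

factor = 1000
doublings = math.log2(factor)               # ~10 doublings for a 1000x jump
years = doublings * DOUBLING_MONTHS / 12    # ~15 years compressed into one event

print(f"{factor}x = {doublings:.1f} doublings = {years:.1f} years of Moore's Law")
```

In other words, any of these scenarios would amount to skipping roughly fifteen years ahead on the current curve overnight.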
Other scenarios:
1. The usual "recursive self-improvement" scenario.
2. We were too conservative: it turns out creating a Strong AI on a
mouse-level brain is easier than we thought; we just never had a
mouse-level brain to experiment with before.
Things that could prevent Strong AI, or delay it by many decades
1. Turns out creating hardware powerful enough to compete with a
human brain is harder than we thought. Note that the "maybe glial
cells play as much of a role as the neurons" argument, even if true,
only delays us by a few years. But perhaps there is something
dramatically wrong with our current understanding of how much
computing power a single neuron has. (Weak scenario)
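To see why per-neuron compute matters so much, here's a back-of-the-envelope sketch. The figures are mine, not Rolf's: ~10^11 neurons, ~10^4 synapses per neuron, and ~100 Hz firing are commonly cited ballpark numbers, while the ops-per-synaptic-event figure is exactly the unknown the scenario turns on:

```python
import math

# Rough brain-compute estimate: neurons x synapses x firing rate x ops per event.
# All figures are ballpark assumptions for illustration, not measurements.
NEURONS = 1e11    # assumed neuron count
SYNAPSES = 1e4    # assumed synapses per neuron
RATE_HZ = 1e2     # assumed average firing rate

for ops_per_event in (1, 1e2, 1e4):  # how much computation one synaptic event "does"
    total = NEURONS * SYNAPSES * RATE_HZ * ops_per_event
    print(f"{ops_per_event:>8g} ops/event -> ~1e{math.log10(total):.0f} ops/s")
```

A factor-of-10^4 uncertainty in that last parameter shifts the hardware requirement by the same 10^4, which at Moore's Law rates is a delay of decades, not years.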
2. Giant disaster or war, say on a scale that kills off more than 20%
of humanity in a single year.
3. Moore's Law hits a brick wall, as do biotech and nanotechnology,
but all for different reasons. (Weak scenario)
4. Getting the software right turns out to be much, much harder than we
thought. Sure, natural selection got it right on Earth, but it has an
infinite universe to play with, and on the 10^10000 other planets
where it failed, no one is around to ask questions.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:58 MDT