From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jan 03 2004 - 20:58:53 MST
Paul Fidika wrote:
> 
> Speaking of which, why is everyone on this list so confident that
> recursive-self-improvement will even work? It seems to be tacitly assumed on
> this list that the amount of intelligence it will take to create a new
> intelligence will increase logarithmically or at least linearly with the
> sophistication of the intelligence to be created, but what is this
> assumption based upon? For example, if X is some objective and numeric
> measure of intelligence, won't it be more likely that it will take X^2
> amount of intelligence (either iterating over a certain amount of time or
> multiple minds working in parallel) to create an X + 1 intelligence? Or
> worse yet, might it take 2^X amount of intelligence to create an X + 1
> intelligence? Perhaps 2^2^X amount of intelligence? If so, then there are
> very definite limits on how far a super-intelligence will be able to go
> before it "runs out of steam," and it might not be all that far.
I do not yet know how to calculate that curve directly.
Problem is, the equations you're offering can't be fitted to hominid 
evolution, to evolution as a whole, to human culture, or to any of the 
other classic cases of accelerating complexity offered by John Smart, Ray 
Kurzweil, and the like.  Change may not be "exponential", but there is 
certainly no evidence that it slows down over time.  For evolution, the 
rate of what we would call "improvement" seems to be *roughly* 
proportional to the size of the existing complexity base in a single 
organism, dC/dt going roughly as C, a *roughly* exponential growth in 
which new changes take less time.  The attempt to precisely fit curves, 
and the argument that these curves can be extrapolated to precise future 
times, strike me as bogus - see Ilkka Tuomi's refutation of Moore's Law, 
for example.  But in rough terms, the curve seems to belong more on an 
exponential graph than on a linear one.  Perhaps each additional gene 
offers an additional point for possibly good mutations to occur.  Perhaps 
evolution of evolvability accumulates, later organisms being more modular 
or expressing better phenotypic accommodations to mutations.  But the 
curve, even for an utterly wimpy blind-deaf-and-dumb optimization process 
like natural selection, is more exponential than linear, and it is 
certainly not logarithmic.  One could make up various equations that 
behave differently, but they don't fit hominid evolution, mammalian 
evolution, the saga of life on Earth, or human cultural progress.
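
To make the contrast concrete, here is a minimal sketch in Python (my 
illustration, not anything from the original post; the unit increments, 
toy cost functions, and "effort accrues at a rate proportional to X" rule 
are all assumptions, not a real measure of intelligence).  It charges each 
step X -> X+1 a cost that is constant, linear, quadratic, or exponential 
in the current intelligence X, so the constant-cost regime reproduces 
dC/dt going as C, while the 2^X regime from the question above is the one 
that runs out of steam:

# Sketch: time to climb from intelligence X to X + 1 under different
# assumptions about the cost of the next increment.  Effort accrues
# at a rate proportional to current intelligence X (smarter minds
# work faster), so one step takes cost(X) / X time.  Toy units only.

def time_to_reach(target, cost, x0=1.0):
    """Total time to climb from x0 to target in unit increments."""
    t, x = 0.0, x0
    while x < target:
        t += cost(x) / x   # time spent on the step x -> x + 1
        x += 1.0
    return t

regimes = {
    "constant cost (dX/dt ~ X)": lambda x: 1.0,
    "linear cost (X)":           lambda x: x,
    "quadratic cost (X^2)":      lambda x: x ** 2,
    "exponential cost (2^X)":    lambda x: 2.0 ** x,
}

for name, cost in regimes.items():
    print(f"{name:27} time to reach X = 30: {time_to_reach(30, cost):>14.1f}")

Constant per-increment cost reaches X = 30 in about 4 time units; the 
quadratic regime takes a few hundred; the 2^X regime takes tens of 
millions, which is the "runs out of steam" scenario in quantitative form.
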
I can see a lot of enormous wins in the design of intelligences, not yet 
taken because they're out of reach for natural selection.  So if the RSI 
curve should run out of steam, as a result of rapidly reaching 
near-perfect optimization and then hitting local resource bounds, it would 
still be far, far above the human ceiling.  Likewise, there is no reason to 
rule out the whole process running to the ceiling in what looks to us like 
days, hours, perhaps seconds.  And if so, that would firmly determine your 
actual experience in dealing with an AI, regardless of how the internal 
dynamics look in detail.
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence