From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 23 2002 - 08:43:35 MDT
Ben Goertzel wrote:
>
> So far, nothing that you have said has specifically addressed this issue.
> I.e., nothing but your own unsupported intuition has been offered in
> favor of your posited "one month upper bound" for the transition from
> human-level AI to vastly superhuman AI.
>
> Nor have I offered any real evidence in favor of my own intuition that it
> will be a bit slower than this!
>
> What I am questioning is not your confidence that the feedback loop
> itself will exist, but your confidence in your quantitative estimate of
> the speed with which the feedback loop will lead to intelligence
> increase.
Look at it this way: Given what I expect the overall shape of the curve to
look like, if you specify that it takes one year to go from human-level AI
to substantially transhuman AI, then it probably took you between a hundred
and a thousand years to get to human-level AI. If you're wondering where
the ability to name any specific time-period comes from, that's where - the
relative speed of the curve at the humanity-point should be very high,
so if you plug in a Fudge Factor large enough to slow down that point to a
year, you end up assuming that AI development took a century or more. Even
so, I'm not sure you can just plug in a Fudge Factor this way - the
subjective rate of the developers is going to impose some limits on how slow
the AI can run and still be developed.
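To make the scaling intuition concrete, here is a minimal sketch in Python - a toy model of my own, assuming capability grows with increasing returns under dC/dt = k*C^2, with the law and every constant chosen purely for illustration. The point is structural: whatever fudge factor slows the crossing past human-level also multiplies the long climb up to human-level.

# Toy model of the fudge-factor scaling argument.
# Assumes a growth law with increasing returns, dC/dt = k * C**2;
# the law and all constants are illustrative, not taken from the post.

def time_to_grow(c_start, c_end, k=1.0):
    """Analytic time to go from c_start to c_end under dC/dt = k * C**2."""
    return (1.0 / c_start - 1.0 / c_end) / k

C_SEED, C_HUMAN, C_TRANSHUMAN = 0.01, 1.0, 10.0   # arbitrary capability units

t_climb    = time_to_grow(C_SEED, C_HUMAN)        # long road up to human-level
t_crossing = time_to_grow(C_HUMAN, C_TRANSHUMAN)  # breakthrough past human-level

print(t_climb / t_crossing)          # ~110: the climb dwarfs the crossing

# A fudge factor F slows every part of the curve equally, so stretching the
# crossing to one year stretches the climb to roughly a century:
F = 1.0 / t_crossing
print(F * t_crossing, F * t_climb)   # crossing = 1 year  ->  climb = ~110 years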
Seed AI will be a pattern of breakthroughs and bottlenecks. As the AI
passes the human-equivalence point on its way between infrahumanity and
transhumanity, I expect it to be squarely in the middle of one of the
largest breakthroughs anywhere on the curve. If this mother of all
breakthroughs is so slow as to take a year, then the curve up to that point,
in which you were crossing the long hard road all the way up to human
equivalence *without* the assistance of a mature seed AI, must have taken at
least a century, if not a millennium.
And if it takes that long, Moore's Law will make it possible to brute-force
it first, meaning the AI will be running on far more processing power than
it needs, so that when self-improvement takes off there will be plenty
of processing power around for immediate transhumanity. Still no Slow
Singularity.
-- Eliezer S. Yudkowsky http://intelligence.org/ Research Fellow, Singularity Institute for Artificial Intelligence