Floppy take-off

From: Carl Feynman (carlf@abinitio.com)
Date: Mon Jul 30 2001 - 16:05:45 MDT


I'm going to try to estimate how much brainpower it takes to improve an
AI substantially. And I'm going to be optimistic, but the results will
still be pretty bad.

There are four potential sources of intellect increase with time:
1. Moore's law. We all know how that works.
2. Increased funding to buy more hardware. This is swamped by Moore's
Law.
3. Tighter coding-- translating everything from LISP to assembler. This
is worthwhile, but produces at most a one-time gain of a factor of ten,
and probably less.
4. Improved algorithms. This can make huge improvements. From 1950 to
1980, computers got a million times faster. But techniques for solving
elliptic PDEs also got a million times more efficient, producing an
overall gain of a trillion (a quick sketch of that compounding follows
this list). This is the kind of improvement Eliezer is
expecting when he talks about self-improving AI. I will now try to
estimate the cost of such gains.
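
To spell out how those two curves compound, here is the multiplication
above written out as a quick Python sketch (the numbers are just the
figures already stated, not measurements):

  hardware_gain = 1e6    # 1950-1980 hardware speedup
  algorithm_gain = 1e6   # same-period gain in elliptic PDE solvers
  print(f"combined gain: {hardware_gain * algorithm_gain:.0e}")   # 1e+12, a trillion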

The AI field has given up (by and large) trying to create general
intelligence. This has been the case for about ten years. But during
those ten years (which I will call "the nineties") lots of progress has
been made on various subproblems. (The AI field has picked the wrong
subproblems, but that's a polemic for another day. The important thing
is that progress has been made on the chosen problems). For example,
here's a paper describing a technique that accelerates solutions to
satisfiability problems by between two and four orders of magnitude:

Gomes, C.; Selman, B.; and Kautz, H. 1998. Boosting combinatorial search
through randomization. In Proceedings of 15th National Conference on
Artificial Intelligence, 431--437. AAAI Press/The MIT Press.

And the insight of the paper can be squashed into a few sentences in its
abstract. Now this seems like a nifty thing: a simple insight that
produces huge gains. All we need is a few of those and we're golden.
But how long did it take to produce this insight? Well, it took the
three authors of the paper working for a year or so. And it also took many
thousands of hours of simulation to develop the statistical insight that
led to the improvement. And it took all the time the authors were in
school being trained, but let's not count that, because if the authors
were AIs, we could amortize it over all of them. So far it doesn't seem
too bad: three person-years, for three orders of magnitude of
improvement. But let's look at all the avenues of investigation that
led nowhere. Here's another example result:

C.R. Feynman, H.L. Voorhees, L.W. Tucker, "Massively parallel approach
to object recognition" (IRCV '88)

I can confidently state that our development of a parallel feature-based
recognition algorithm has had no effect whatever on present-day AI. But
at the time, we thought we were doing good stuff, and we spent years on
it. The problem is that you can't tell, while you're doing it, whether
what you're doing will lead anywhere. So we have to count the cost not
just of the research that worked, but of the whole field of AI, most of
which led nowhere.

So let's say that during the nineties the field improved its algorithms
by a factor of a thousand (some areas improved more than this, some
less). And let's say this was done by a thousand people working for ten
years (a low estimate of how many people are in the field). A factor of
a thousand is about ten doublings, and ten thousand person-years spread
over ten doublings is a thousand person-years each; round that up to be
safe. So doubling the power of an AI takes about 2000 person-years. We
will begin sliding
into a singularity when the rate of self-improvement exceeds the rate of
improvement due to Moore's law. This will require an AI able to do 2000
person-years of thinking in eighteen months. In other words, we will
need an AI of 1.3 kilobrain capacity. I don't know what that is in
flops, but it's an awful lot no matter how you slice it.
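
For concreteness, here is that arithmetic as a quick Python sketch; the
2000 person-year and eighteen-month figures are the estimates above, not
measurements:

  person_years = 1000 * 10            # a thousand people for ten years
  doublings = 10                      # a factor of ~1000 is about 2^10
  per_doubling = 2 * person_years / doublings   # ~1000, rounded up to 2000
  moore_doubling_time = 1.5           # years per Moore's-law doubling
  brains_needed = per_doubling / moore_doubling_time
  print(f"{brains_needed:.0f} brain-equivalents, "
        f"about {brains_needed / 1000:.1f} kilobrains")
  # prints: 1333 brain-equivalents, about 1.3 kilobrains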

Let's be optimistic and say that Webmind had an AI with a capacity of
0.5 brains. It will take Moore's Law about 16 years to upgrade their
machine to 1.3 kilobrains. If we assume that the rate of progress in AI
algorithms (doubling every two years) continues, and that the AI field
is working on the right problems, the time is decreased to about ten
years. Still pretty long.
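
The same back-of-the-envelope check in Python, taking the 0.5-brain
starting point and the two-year algorithmic doubling as given; it comes
out to roughly the sixteen and ten year figures above:

  import math

  start, target = 0.5, 1333.0              # brain-equivalents
  doublings = math.log2(target / start)    # about 11.4 doublings to go
  moore_only = doublings * 1.5             # hardware alone, doubling every 18 months
  combined = doublings / (1 / 1.5 + 1 / 2) # hardware plus 2-year algorithmic doubling
  print(f"Moore's law alone: about {moore_only:.0f} years")        # ~17 years
  print(f"with algorithmic progress: about {combined:.0f} years")  # ~10 years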

Boy, I hope I'm wrong.

--Carl Feynman


