Re: Systems engineering

From: Dani Eder (danielravennest@yahoo.com)
Date: Fri May 20 2005 - 08:05:18 MDT


> unless you believe that the sole reason AGI hasn't
> been cracked yet is inadequate computing power.
>
> * Michael Wilson

Not the sole reason, but a major one. I believe there
is a horizon for AGI that is a function of how
sophisticated your design is and how much computing
power you have. At the brute-force end of
sophistication, an accurate enough model of neurons,
with enough neurons to equal a human brain, should
work, because it works for us. Depending on what
you think 'accurate enough' means, the computing power
required is estimated to be in the 100 to 100,000
TFlop range. The most powerful computer ever built,
Blue Gene/L, has just cracked the lower boundary
of that range. It will be a few more years before
any AI researchers get their hands on a machine that
powerful. So far, then, inadequate computing power
has been the controlling factor at that end.
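
(Here is a back-of-envelope version of where that range
comes from, as a minimal Python sketch. The neuron count,
synapse counts, firing rates, and ops-per-event figures
are illustrative assumptions on my part, not measurements;
plugging in the optimistic and pessimistic ends gives
roughly the 100 to 100,000 TFlop spread.)

    # Back-of-envelope brute-force brain simulation cost.
    # All inputs are assumed round numbers, for illustration only.
    def brain_sim_tflops(neurons, synapses_per_neuron,
                         firing_rate_hz, ops_per_synapse_event):
        """Required compute, in TFlop/s, for a simple
        event-driven neuron model."""
        ops_per_second = (neurons * synapses_per_neuron *
                          firing_rate_hz * ops_per_synapse_event)
        return ops_per_second / 1e12  # ops/s -> TFlop/s

    # Optimistic: 1e11 neurons, 1e3 synapses each, 1 Hz, 1 op/event
    low = brain_sim_tflops(1e11, 1e3, 1.0, 1)     # ~100 TFlop
    # Pessimistic: 1e4 synapses each, 100 Hz, 1 op/event
    high = brain_sim_tflops(1e11, 1e4, 100.0, 1)  # ~100,000 TFlop
    print(low, high)  # 100.0 100000.0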

Since evolution is non-directed, and neural development
is likely sloppy and inefficient, I expect that
purposeful design should be able to work more
efficiently than a brute-force simulation.
The unknown right now is how the slope of computing
power vs. design complexity runs. Given the
high starting point at the simple-simulation end
of the chart, I find it plausible that we are still
below the required computing-power horizon at all
points.

As a made-up example, assume a brute force simulation
requires 3,000 TFlop, that a design team of 10 can
better that by a factor of 10, and that a design team
of 1000 can better that by another factor of 10.
30 TFlop is just now becoming available, or soon will be,
in an academic setting; but as far as I know no AI
project has a 1000-person design team, and no one
has had 30 TFlop machines long enough to do the
software development or to let the AI learn on its own.
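
(Spelling that arithmetic out, purely to make the assumed
numbers explicit; the 3,000 TFlop baseline and the two
factors of 10 are, again, invented:)

    # Hypothetical efficiency gains from design effort.
    # The 3,000 TFlop baseline and the 10x factors are made up.
    brute_force_tflop = 3000.0

    team_10_tflop = brute_force_tflop / 10   # 10-person team: 300 TFlop
    team_1000_tflop = team_10_tflop / 10     # 1000-person team: 30 TFlop

    print(team_10_tflop, team_1000_tflop)    # 300.0 30.0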

When 100 TFlop desktop PCs have been around for a
decade and we still don't have an AI, I'll tend to
think there is some other obstacle at work. Until
then, or until someone provides convincing reasoning
that it will take vastly less computing power than
the rough estimates above, I'll keep thinking the
problem has been insufficient computing power.

The slope of the intelligence horizon contributes
to the likelihood of a fast singularity. When the
AI itself becomes part or all of the design team,
it can follow a trajectory that rapidly crosses
or moves above the horizon. This can be a combination
of more efficient implementation on the same
hardware and replication on additional hardware.
If the slope of the intelligence horizon is steep
enough, it leads to a runaway improvement. If the
slope is shallow, the improvements will converge
to a stable level.
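
(A toy model of that last point, with an invented 'gain
ratio' r standing in for the slope of the horizon: suppose
each round of self-improvement yields a gain that is r
times the previous round's gain. With r below 1 the total
improvement is a bounded geometric series and settles at a
stable level; with r at or above 1 it runs away.)

    # Toy self-improvement model: gain_n = r * gain_(n-1),
    # where r stands in for the slope of the intelligence horizon.
    def self_improvement(r, initial_gain=1.0, rounds=50):
        capability, gain = 1.0, initial_gain
        for _ in range(rounds):
            capability += gain
            gain *= r
        return capability

    print(self_improvement(r=0.5))  # shallow slope: converges near 3.0
    print(self_improvement(r=1.5))  # steep slope: runaway growth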

Dani

                


