Re: AI hardware was 'Singularity Realism'

From: J. Andrew Rogers
Date: Sun Mar 07 2004 - 10:24:25 MST

On Mar 6, 2004, at 2:17 PM, Keith Henson wrote:
> That's not surprising considering how much computational power biology
> lavishes on the problem. Have you ever looked up the MIPS rating of a
> retina?

The focus on the amount of computing power required to simulate biology
is a bit of a strawman in the AGI argument because it asserts a
necessarily asymmetric system and then marvels at the asymmetry without
recognizing that it *is* an asymmetry. Modeling any bulk system (like
a retina) is exponentially more expensive than modeling the algorithmic
machine that generates the state.

Simulating biology at this level is akin to using a lookup table for
the first 10^40 digits of pi rather than using the BBP algorithm to
generate the digits you need. And the lookup table is only a
relatively poor approximation of pi, unlike BBP.
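To make the analogy concrete, here is a minimal sketch of the BBP (Bailey-Borwein-Plouffe) spigot, which computes the nth hexadecimal digit of pi directly, without computing or storing any of the digits before it (double-precision float error limits this particular sketch to modest n; the names here are just illustrative):

```python
def pi_hex_digit(n):
    """Return the (n+1)-th hexadecimal digit of the fractional part of pi,
    via the Bailey-Borwein-Plouffe formula:
    pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))."""
    def series(j):
        # Fractional part of sum over k of 16^(n-k) / (8k + j).
        s = 0.0
        for k in range(n + 1):
            # Non-negative powers: use modular exponentiation to stay small.
            denom = 8 * k + j
            s = (s + pow(16, n - k, denom) / denom) % 1.0
        # Negative powers: terms shrink geometrically, so sum a short tail.
        k = n + 1
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)
```

The point of the analogy survives in the code: the generator is a dozen lines, while a lookup table for the first 10^40 digits would need on the order of 10^40 storage, and is still only a finite approximation of pi.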

Any serious AI effort would have to approach it from the standpoint of
implementing the underlying algorithmic machinery of intelligence. Not
only is this approach tractable, it is also a hell of a lot more likely
to yield useful results than chasing a ghost that most everyone
acknowledges is both intractable and a poor theoretical approximation
in the best case.

And to answer a previous question, I would say that today we ("we" in
the sense of anyone who bothers to study the theoretical issue of AGI)
have a pretty good idea of what is going on in the underlying
algorithmic machinery of intelligence. The grasp isn't perfect and
there are some implementation issues, but no real theoretical
show-stoppers that I can see, and the fact that several other people
are working on implementations in the same general area suggests that
many others versed in the subject don't see any serious show-stoppers
either. I am cognizant of the history of the field, but
I think we have something actually close to a real and usable
foundation these days.

> I happen to be a bit skeptical that the hardware is up to the task
> based on arguments by Hans Moravec, Ray Kurzweil and others. In the
> long run this is not a problem since hardware equal to the task is
> less than a human generation away. If you have a radical approach
> that would allow cockroach level hardware to generate superhuman AI
> level performance, I would sure like to know what it is.

Almost any approach that ignores biology and goes to the math will be
MANY orders of magnitude more scalable and capable on a given piece of
hardware. Moravec, Kurzweil, and others have biology blinders on, and
I think it is fairly trivial to show that their view is predicated on
some specific assumptions that arguably don't apply in the general
case.

For most of the non-biology AGI projects out there, there seems to be
some consensus that commodity hardware is within an order of magnitude
of what is needed to build human-level intelligence, and that this
"order of magnitude" is not a moving target i.e. experience shows that
we are actually closing on the necessary hardware. The specifics of
the hardware limitations vary from implementation to implementation,
but no one seems to be saying that the hardware is horribly inadequate
to do a functional implementation.

j. andrew rogers

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT