Re: AI hardware was 'Singularity Realism'

From: Keith Henson (hkhenson@rogers.com)
Date: Sun Mar 07 2004 - 19:30:38 MST


At 09:24 AM 07/03/04 -0800, you wrote:

>On Mar 6, 2004, at 2:17 PM, Keith Henson wrote:
>>That's not surprising considering how much computational power biology
>>lavishes on the problem. Have you ever looked up the MIPS rating of a retina?
>
>The focus on the amount of computing power required to simulate biology is
>a bit of a strawman in the AGI argument because it asserts a necessarily
>asymmetric system and then marvels at the asymmetry without recognizing
>that it *is* an asymmetry. Modeling any bulk system (like a retina) is
>exponentially more expensive than modeling the algorithmic machine that
>generates the state.

*What* the retina does (feature extraction, motion detection, compression)
is fairly well understood. Doing it with slow hardware (the biological
kind) takes a whacking lot of it running in parallel. Considering the
resolution of the retina and its known update speed, and comparing that to
Photoshop doing something of the same sort, I can safely state that my
retinas have a *lot* more processing power than the 2 GHz processor in
this machine.
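
(To put rough numbers on it, here is a back-of-envelope sketch in the
style of Moravec's retina estimate. Every figure in it is an assumption,
not a measurement:)

    # Moravec-style retina estimate; all three inputs are assumptions.
    ganglion_outputs = 1_000_000  # ~1e6 optic nerve fibers per retina
    updates_per_sec = 10          # the retina reports ~10 times a second
    ops_per_output = 100          # machine instructions per detection

    retina_mips = ganglion_outputs * updates_per_sec * ops_per_output / 1e6
    print(retina_mips)  # ~1000 MIPS per retina, ~2000 for the pair

By that count the pair of retinas is already in the neighborhood of the
nominal instruction rate of a 2 GHz chip, before counting the overhead of
doing the same job in software.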

>Simulating biology at this level is akin to using a lookup table for the
>first 10^40 digits of pi rather than using the BBP algorithm to generate
>the digits you need. And the lookup table is only a relatively poor
>approximation of pi, unlike BBP.

I don't get the analogy you are trying to make here.
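
(For other readers: I take BBP to be the Bailey-Borwein-Plouffe formula,
which generates digits of pi on demand rather than storing them. A
minimal sketch, assuming that is the algorithm meant:)

    # Bailey-Borwein-Plouffe series:
    # pi = sum over k of (1/16^k) *
    #      (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
    def bbp_pi(terms=12):
        total = 0.0
        for k in range(terms):
            total += (1.0 / 16**k) * (4.0 / (8*k + 1) - 2.0 / (8*k + 4)
                                      - 1.0 / (8*k + 5) - 1.0 / (8*k + 6))
        return total

    print(bbp_pi())  # ~3.141592653589793, double precision in ~12 terms

The contrast being drawn, as I read it, is a short program that produces
any digit you need versus a 10^40-entry table that merely stores them.
What I don't see is how that maps onto simulating brains.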

>Any serious AI effort would have to approach it from the standpoint of
>implementing the underlying algorithmic machinery of intelligence.

Aircraft might be an analogy supporting your contention. Aircraft don't
use flapping wings to fly, and no birds use propellers. Still, it really
helps to understand that the same physics of flight applies to both.

>Not only is this approach tractable, it is also a hell of a lot more
>likely to yield useful results than chasing a ghost that most everyone
>acknowledges is both intractable and a poor theoretical approximation in
>the best case.

Short range, I agree that trying to simulate a brain is probably beyond
us. Long range, especially given nanotech, I don't see any difficulty. As
far as biological brains being "a poor theoretical approximation in the
best case" goes, I really don't understand how you can make such a
statement. On the other hand, maybe I do.

>And to answer a previous question, I would say that today we ("we" in the
>sense of anyone who bothers to study the theoretical issue of AGI) have a
>pretty good idea of what is going on in the underlying algorithmic
>machinery of intelligence. The grasp isn't perfect and there are some
>implementation issues, but no real theoretical show-stoppers that I can
>see, and that there are several other people working on implementations in
>the same general area seems to indicate that many other people versed in
>the subject don't see any serious show-stoppers either. I am cognizant
>of the history of the field, but I think we have something actually close
>to a real and usable foundation these days.

Is there a pointer you could suggest to a simple explanation of how to
generate intelligence? I am seriously interested in this, among other
reasons because I make the case that humans have (evolved,
gene-constructed) psychological traits that sabotage intelligence in
certain situations.

>>I happen to be a bit skeptical that the hardware is up to the task based
>>on arguments by Hans Moravec, Ray Kurzweil and others. In the long run
>>this is not a problem since hardware equal to the task is less than a
>>human generation away. If you have a radical approach that would allow
>>cockroach level hardware to generate superhuman AI level performance, I
>>would sure like to know what it is.
>
>Almost any approach that ignores biology and goes to the math will be MANY
>orders of magnitude more scalable and capable on a given piece of hardware.

Again, the aircraft analogy. There is no bird with the takeoff weight of a
747.

>Moravec, Kurzweil, and others have biology blinders on, and I think it is
>fairly trivial to show that their view is predicated on some specific
>assumptions that arguably don't apply in the general case.
>
>For most of the non-biology AGI projects out there, there seems to be some
>consensus that commodity hardware is within an order of magnitude of what
>is needed to build human-level intelligence, and that this "order of
>magnitude" is not a moving target i.e. experience shows that we are
>actually closing on the necessary hardware. The specifics of the hardware
>limitations vary from implementation to implementation, but no one seems
>to be saying that the hardware is horribly inadequate to do a functional
>implementation.

Hmm. If the hardware is within an order of magnitude, then the AGI should
just be an order of magnitude slower. I.e., it could not keep up with a
chat room, but it could take part in email list exchanges.
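
(The arithmetic, with an assumed human reply time of a few seconds:)

    # A 10x hardware shortfall shows up as a 10x slower conversationalist.
    human_reply_s = 3       # assumed typical chat-room reply time, seconds
    slowdown = 10           # "within an order of magnitude"
    print(human_reply_s * slowdown)  # 30 seconds: hopeless in a chat
                                     # room, no problem on a mailing list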

You may be right. I don't know if I should be thankful or terrified.

Keith Henson

PS. If you can find them, I recommend the series of books by Alexis Gilliland:

REVOLUTION FROM ROSINANTE
LONG SHOT FOR ROSINANTE
THE PIRATES OF ROSINANTE

These books are (I think) both the best ever done on the space colony
theme and the best on AIs and their interactions with people. The series
has the first seduction scene between a human and an AI, as well as an AI
that writes up a religion and another that proselytizes. Further, it has
a legal mechanism for giving AIs legal rights that is just inspired.

>j. andrew rogers


