Re: Human intelligence is obviously absurd

From: Keith Henson (hkhenson@rogers.com)
Date: Sat Jan 29 2005 - 11:17:38 MST


At 06:00 AM 29/01/05 -0800, you wrote:
>Suppose that humanity, instead of evolving intelligence on a hundred
>trillion 200Hz synapses,

Excuse me if I am preaching to the converted, but if you don't know Dr.
Calvin's proposed mechanisms for how thinking occurs, it might be worth
taking a look here:

http://williamcalvin.com/bk9/bk9ch2.htm

The problem might not be as large as implied.

Much of what Calvin thinks generates mental activity, and eventually
intelligence, lies in the communication mechanism between cortical
"elements": groups of roughly 100 cells in a bundle about 0.03 mm in
diameter. (There are detailed drawings at that URL.) I think some
substantial number of cells operating in concert is required to get
consistent activity. The jitter in the firing of a single nerve cell may
have to be reduced by putting a mess of them in parallel. The synapse may
just be too low a level, like a single transistor in a processor chip.
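
To make the averaging point concrete, here is a toy sketch of my own (not
anything from Calvin): if each cell fires with, say, 5 ms of timing jitter,
the average firing time of N independent cells jitters only about 1/sqrt(N)
as much.

# Toy illustration, my own gloss: averaging the firing times of N
# independently jittery cells shrinks the timing jitter roughly as 1/sqrt(N).
import random
import statistics

def ensemble_jitter(n_cells, sigma_ms=5.0, trials=2000):
    """Std. dev. of the ensemble-average firing time for n_cells."""
    means = []
    for _ in range(trials):
        times = [random.gauss(0.0, sigma_ms) for _ in range(n_cells)]
        means.append(sum(times) / n_cells)
    return statistics.stdev(means)

for n in (1, 10, 100):
    print(f"{n:4d} cells: ensemble jitter ~ {ensemble_jitter(n):.2f} ms")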

Essentially, Calvin's model involves the copying (with errors) of patterns
of activity in the cortical elements, the spreading of these patterns over
the cortical surface, and a full-scale Darwinian evolutionary process
applied to the patterns with sub-second time constants.
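
As a cartoon of the bare Darwinian skeleton he invokes (copying with errors
plus selection), and emphatically not his actual cortical model, something
like the toy loop below has the right shape; the bit-string "patterns" and
the fitness function are arbitrary stand-ins of mine.

# Cartoon of copy-with-errors plus selection; the bit-string patterns and
# the fitness function are arbitrary stand-ins, not Calvin's model.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # stand-in for a "resonant" pattern

def fitness(pattern):
    return sum(p == t for p, t in zip(pattern, TARGET))

def copy_with_errors(pattern, error_rate=0.05):
    return [bit ^ 1 if random.random() < error_rate else bit
            for bit in pattern]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                      # selection
    population = survivors + [copy_with_errors(random.choice(survivors))
                              for _ in range(25)]    # copying with errors
best = max(population, key=fitness)
print("best fitness:", fitness(best), "pattern:", best)

The point of the sub-second time constants, as I read it, is that a loop
like this would run through many "generations" in the time it takes to have
a thought.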

The physical mechanism is that the sideways branches of a column near the
cortical surface inhibit neighboring columns out to 0.5 mm and excite those
at that distance. Thus if two columns spaced 0.5 mm apart start singing the
same song, they will entrain others on a triangular grid at the
intersections of 0.5 mm circles. This gives rise to a transitory, spreading
hexagonal pattern of activation.
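
Purely as a geometric sketch of where that triangular grid comes from (the
only number taken from the description above is the 0.5 mm spacing):

# Geometric sketch only: recruited columns sit on a triangular lattice with
# 0.5 mm spacing, whose unit cells tile the surface as a hexagonal mosaic.
import math

SPACING_MM = 0.5

def triangular_lattice(rows, cols, spacing=SPACING_MM):
    """(x, y) positions, in mm, of columns on a triangular grid."""
    points = []
    for r in range(rows):
        for c in range(cols):
            x = c * spacing + (r % 2) * spacing / 2.0  # offset alternate rows
            y = r * spacing * math.sqrt(3) / 2.0       # row pitch for equilateral triangles
            points.append((round(x, 3), round(y, 3)))
    return points

print(triangular_lattice(3, 3))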

Now Calvin might not be right in how this works, but his is the only model
I know of that proposes to explain our mental processes. (Please provide
pointers in this thread for me if you know of other models. I would like
to look at them.)

I am not sure how much of this applies to AI. Ultimately the same laws of
physics apply to birds and aircraft. Perhaps the same processes we use to
get natural intelligence will be required for AI, though implemented on a
different substrate.

*If* Darwinian selection is an essential part of the process, then the
emergence of intelligence is going to be inherently wasteful of
computational resources. If this feature of intelligence could be
*demonstrated* to be essential, it might comfort those worried about a
transcendent AI emerging on a 386 machine.

>had instead evolved essentially equivalent intelligence on a million 2 GHz
>processors using slightly more efficient serial algorithms (my example
>postulates a factor-of-ten efficiency improvement, no more). Let's call
>these alternate selves Humans.

At a square centimeter per processor, this would be a square of silicon ten
meters on a side. That is about a factor of 100 off from my last wild guess
of what it would take to implement human-level processing, or a factor of
ten given your efficiency assumption. At least the power bill would not be
a line item in the national budget. :-)
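
Spelling out the arithmetic (the one square centimeter per processor is my
own round-number assumption):

# Back-of-the-envelope check; 1 cm^2 per processor is my round-number guess.
processors = 10**6              # one million 2 GHz processors, per the quoted post
area_cm2 = processors * 1.0     # 1 cm^2 each
area_m2 = area_cm2 / 10_000     # 10,000 cm^2 per square meter
side_m = area_m2 ** 0.5
print(f"{area_m2:.0f} m^2 of silicon, a square about {side_m:.0f} m on a side")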

Keith Henson

>Would anyone here dare to predict, in advance, that it was even
>*theoretically possible* to achieve Human-equivalent intelligence on 200Hz
>processors no matter *how* many of them you had?
>
>Even I wouldn't dare. Trying my best to be conservative and to widen my
>confidence interval, my guess is that I would guess 10KHz, or 1KHz given a
>superintelligent programmer, and I would probably have the lowest guess in
>the crowd - both because of my guess that intelligence doesn't require
>much crunch, and because I knew to widen my confidence intervals.
>
>Ben Goertzel would laugh at me, saying that Human-equivalent intelligence
>carried out with one thousand sequential serial operations per second was
>obviously impossible. Perhaps Ben would suggest that I try writing code
>that executed with a bound of ten thousand sequential serial operations,
>to get a feel for how restrictive that limit was.
>
>And if you suggested two hundred serial instructions per second - pfft!
>Now you're just being silly, they would say; and while I might credit you
>for fearless audacity, I probably wouldn't defend you, lest I be tarred
>with the same brush. Like the reaction you might get if you suggested
>that intelligence could run on a 286, or use less than 4KB of RAM, or be
>produced by natural selection.
>
>If any computational neurobiologists were present, they might even be able
>to provide a quantitative mathematical argument, showing that some of the
>basic algorithms known to be used in Human neurobiology intrinsically
>required more than ten thousand serial steps per second. So too did Lord
>Kelvin prove by quantitative calculation that the Sun could not have
>burned for more than a few tens of millions of years.
>
>One of the great lessons of history is that "absurd" is not a scientific
>argument. The future is usually "absurd" relative to the past. Reality
>is very tightly restrained in the kinds of absurdity it presents you with;
>the human history of the 20th century might be absurd from the perspective
>of the 19th century, but not one of those absurdities violated the law of
>conservation of momentum. Even so, "absurd" is not good evidence because
>of the historical observation that the answers we now know were "absurd"
>to people who didn't grow up with our background assumptions. "Obvious"
>is often wrong and "absolutely certain" isn't remotely close to 1.0
>calibration.
>
>Widen the bounds of your confidence interval. Spread the wings of your
>probability distribution, and fly.
>
>--
>Eliezer S. Yudkowsky http://intelligence.org/
>Research Fellow, Singularity Institute for Artificial Intelligence


