RE: obstacles to unbounded intelligence

From: Damien Broderick
Date: Sat Jan 26 2002 - 21:34:35 MST

At 09:05 PM 1/26/02 -0700, Ben wrote:

>William Calvin ("The Ascent of Mind") made a decent argument that the
>reason our brain isn't bigger than it is, is that evolution couldn't figure
>out how to make a woman's pelvis open wider....

Bill Calvin was far from the first to make this point, but speaking of
points, what ya got against us Coneheads? I mean that literally;
cephalization need not be limited to blowing up a balloon evenly all the
way around. Maybe egghead or conehead configurations would be too fragile,
bonewise? Actually I believe eggs are more robust than spheres. Orient the
brain vertically and you have an egg anyway. Hmm. Damned legacy code,
that's what it is, nothing cosmic about it. I wonder what became of the
superintelligent kangaroos? Growing up in (and out of, and back in again) a
pouch should be good for a waxing brain.

>There seem to be OK tricks in the context of my own AI work, but of course I
>don't *really* know how well these tricks will scale.... That would require
>doing some very (but perhaps not impossibly) hard math...

I don't know if this has any salience, but it might be a useful lateral
consideration, since in its realm it appears to run counter to yr
presentiment: from the AIP list...


Bruce Malamud (King's College, London) and Donald Turcotte
(Cornell University) argued that "fractal"
assessments of natural hazards are often more realistic than older
statistical models in predicting rare but large disasters. They cited as
an example the great Mississippi flood of 1993; a fractal-based
calculation for a flood of this magnitude predicts one every 100
years or so, while the more-often-used "log-Pearson" model
predicts a period of about 1500 years.
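The contrast is easy to sketch in a few lines. Here is a minimal power-law ("fractal") recurrence fit on synthetic annual peak discharges; the numbers and the Weibull plotting position are illustrative assumptions, not Malamud and Turcotte's data or method:

```python
import numpy as np

# Power-law ("fractal") flood-frequency sketch on synthetic annual peak
# discharges -- the numbers and the Weibull plotting position are
# illustrative assumptions, not Malamud and Turcotte's data or method.
rng = np.random.default_rng(0)
peaks = np.sort(10_000 * rng.pareto(2.0, size=50) + 20_000)[::-1]

# Empirical recurrence interval for the rank-m flood out of n years.
n = len(peaks)
recurrence = (n + 1) / np.arange(1, n + 1)  # largest flood -> longest interval

# Fractal model: log(discharge) is linear in log(recurrence interval).
slope, intercept = np.polyfit(np.log(recurrence), np.log(peaks), 1)

def power_law_interval(discharge):
    """Recurrence interval (years) the power-law fit implies for a discharge."""
    return np.exp((np.log(discharge) - intercept) / slope)
```

The log-Pearson approach instead fits a Pearson Type III distribution to the log-discharges; the power law's heavier tail is what shortens the predicted interval for extreme floods.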
    In the realm of earthquakes, John Rundle (who heads the
Colorado Center for Chaos and Complexity at the University of
Colorado, 303-492-1149) described a
model in which the customary spring-loaded sliding blocks used to
approximate individual faults have a more realistic built-in leeway
(or "leaky thresholds," not unlike "integrate-and-fire" provisions
used in the study of neural networks) for simulating the way in
which faults jerk past each other. Applying these ideas to
seismically active southern California, the model defines 3000
coarse-grained regions, each 10 km by 10 km (the typical size for a
magnitude-6 quake). Then a coarse-grained wave function,
analogous to those used in quantum field theory, is worked out for
the region, and probabilities for when and where large quakes
would occur are determined. Rundle claims to have good success
in predicting, retroactively, the likelihood for southern-California
earthquakes over the past decade and makes comparable
prognostications for the coming decade. (See also Rundle et al.,
Physical Review Letters, 1 October 2001; and Rundle et al.,
PNAS, in press).
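For flavor, here is a minimal 1-D "leaky threshold" slider-block sketch in the integrate-and-fire spirit; the update rule and every parameter are illustrative guesses, not Rundle's actual model:

```python
import numpy as np

# 1-D "leaky threshold" slider-block sketch in the integrate-and-fire
# spirit.  The update rule and all parameters here are illustrative
# assumptions, not Rundle's actual model.
rng = np.random.default_rng(1)
N = 100                          # blocks along a single fault
stress = rng.uniform(0.0, 1.0, N)
threshold = 1.0                  # failure ("firing") threshold
leak = 0.001                     # fractional stress leaked per step
load = 0.01                      # tectonic loading per step
transfer = 0.4                   # share of a dropped load sent to each neighbor

events = []                      # (time step, number of blocks slipping)
for step in range(2000):
    stress += load               # integrate the tectonic drive...
    stress -= leak * stress      # ...minus the sub-threshold leak
    failed = np.flatnonzero(stress >= threshold)
    while failed.size:           # cascade: one slip can trigger neighbors
        for i in failed:
            drop = stress[i]
            stress[i] = 0.0
            for j in (i - 1, i + 1):
                if 0 <= j < N:
                    stress[j] += transfer * drop
        events.append((step, failed.size))
        failed = np.flatnonzero(stress >= threshold)
```

Because each slip hands only 0.8 of its dropped stress to neighbors, cascades always die out; tallying the event sizes is what gives these models their Gutenberg-Richter-like statistics.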
      At the AGU meeting Mandelbrot himself delivered the first
Lorenz Lecture, named for chaos pioneer Edward Lorenz.
Mandelbrot discussed, among other things, how the process of
diffusion-limited aggregation (DLA) is characterized by not one
but two fractal dimensions. DLA plays a key role in many natural
phenomena, such as the fingering that occurs when two fluids
interpenetrate. In a DLA simulation, one begins with a single seed
particle. Then other particles, after undergoing a "random walk,"
attach themselves to the cluster. This results in a branching
dendritic-like structure in which the placement of new particles is
subject to the blockage of existing limbs. You can study the
dimensionality of this structure by drawing a circle of a given
radius around the original seed particle, counting the number of
particles lying on that circle, and counting up the angular gaps
between branches at that radius.
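That random-walk-and-stick procedure is easy to sketch. A tiny on-lattice DLA run (a few hundred particles, nothing like the 30-million-particle clusters described below), with a crude radius-counting dimension estimate tacked on; grid size, particle count, and kill radius are all illustrative choices:

```python
import numpy as np

# Tiny on-lattice DLA sketch.  All parameters (grid size, particle count,
# kill radius) are illustrative choices, not from the work discussed here.
rng = np.random.default_rng(2)
size = 201
c = size // 2
grid = np.zeros((size, size), dtype=bool)
grid[c, c] = True                      # the single seed particle
max_r = 1.0                            # current cluster radius
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def launch():
    """Start a walker just outside the current cluster radius."""
    ang = rng.uniform(0.0, 2.0 * np.pi)
    r = max_r + 2.0
    return c + int(r * np.cos(ang)), c + int(r * np.sin(ang))

n_particles = 200
for _ in range(n_particles):
    x, y = launch()
    while True:
        # Stick as soon as the walker sits next to an occupied site.
        if any(grid[x + dx, y + dy] for dx, dy in steps):
            grid[x, y] = True
            max_r = max(max_r, np.hypot(x - c, y - c))
            break
        dx, dy = steps[rng.integers(4)]
        x, y = x + dx, y + dy
        if np.hypot(x - c, y - c) > max_r + 20:   # wandered off: relaunch
            x, y = launch()

# Crude dimension estimate: particles within radius r should scale ~ r**D.
xx, yy = np.indices(grid.shape)
dist = np.hypot(xx - c, yy - c)
radii = np.array([4.0, 8.0, 16.0])
counts = [int((grid & (dist <= r)).sum()) for r in radii]
D = np.polyfit(np.log(radii), np.log(counts), 1)[0]
```

For large 2-D clusters D should settle near the familiar 1.71; at this toy size the estimate is rough, which is exactly the small-sample ambiguity the huge simulations were built to get past.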
     For many years studies of DLA have been confused by
conflicting reports as to the underlying fractal dimensionality.
Now Mandelbrot (at both IBM, 914-945-1712, and at Yale), Boaz
Kol, and Amnon Aharony (972-3-640-8558, at the University of
Tel Aviv) have shown by employing a
massive simulation involving 1000 clusters, each of 30 million
particles (previous efforts had used no more than tens of thousands
of particles) that two different dimensionalities are always
present, but this only becomes apparent in huge simulations.
Comparing a modest (10^5 particles) and a large (10^8 particles)
simulation shows that the larger cluster is not merely a scaled-up
version of the smaller (see figures at
These results (4 February 2002 issue of Physical Review Letters)
are the first quantitative evidence for this type of nonlinear
self-similarity.

Damien Broderick

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:37 MDT