RE: When does it pay to play (lottery)?

From: Ben Goertzel (ben@goertzel.org)
Date: Mon Jan 24 2005 - 18:32:00 MST


> Backprop NNs don't concern me because they are (usually) not open-ended
> optimization processes. A GA is an open-ended optimization process. So
> is one replicator or replicating hypercycle.

Yes, but a GA is a very simple, stupid and inefficient open-ended
optimization process.

And GA/GPs are always run with a fixed-size or maximum-size genome, which is
far too small to encompass any kind of intelligently behaving program.

The reason is that if you make the genome too big, the simple learning
algorithm of GA/GP is too dumb to learn anything with plausible population
sizes...
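
To make the scale problem concrete, here is a minimal bit-string GA in
Python -- just a sketch, with the genome length, population size, and toy
one-max fitness function chosen arbitrarily for illustration, not taken
from anyone's actual system. The entire "learning algorithm" is selection,
crossover and mutation over fixed-length strings:

  import random

  GENOME_LEN = 64              # fixed-size genome -- the point at issue
  POP_SIZE = 100               # a plausible population size
  MUT_RATE = 1.0 / GENOME_LEN  # per-bit mutation probability

  def fitness(genome):
      # Toy objective: count the 1-bits ("one-max"). Any bit-string
      # scoring function could be dropped in here.
      return sum(genome)

  def tournament(pop, k=2):
      # Return the fitter of k randomly chosen individuals.
      return max(random.sample(pop, k), key=fitness)

  def crossover(a, b):
      # One-point crossover of two parents.
      cut = random.randrange(1, GENOME_LEN)
      return a[:cut] + b[cut:]

  def mutate(genome):
      # Flip each bit independently with probability MUT_RATE.
      return [bit ^ 1 if random.random() < MUT_RATE else bit
              for bit in genome]

  pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
         for _ in range(POP_SIZE)]

  for generation in range(200):
      pop = [mutate(crossover(tournament(pop), tournament(pop)))
             for _ in range(POP_SIZE)]

  print(max(fitness(g) for g in pop))

That's the whole optimization process. Everything interesting has to be
squeezed through those few operators, which is why genome and population
sizes matter so much.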

> Unintelligence coughed up intelligence once, and the spark of that fire
> (before it accumulated all the complexity to which you refer) was less
> sophisticated and certainly less directed than today's GAs, which come
> into existence carrying fully-formed complexity that might have taken a
> billion years to evolve on Earth, or, in some cases, would not be
> accessible to natural selection at all.

This is just not true. The primordial soup was vastly more complex than
today's GAs, because it relied on chemistry, which is astoundingly subtler
than the bit strings or simple program trees of GA/GP...

> The computational path from unintelligence to intelligence exists. It
> was climbed once in the total absence of thoughtful design, just from
> steady optimization pressure.

Yeah, but the path from chemical complexity to biological complexity to
mental complexity is a totally different animal from the path from a
bit-string GA or Koza-style GP to anything. There are all kinds of
self-organizational mysteries underlying soups of chemical compounds,
including potentially funky quantum phenomena involved in protein folding,
etc. Not so for GA/GP....

> Don't tell me it can't happen again, not unless you can calculate your
> answer.

Eli, we can't prove via calculation that the solar system won't collapse
tomorrow, as physics hasn't yet solved the Newtonian N-body problem....

The mathematical work of David Goldberg and Michael Vose on GAs lets us
estimate what kinds of programs can be learned via GA with what population
sizes. This work makes it pretty clear -- though does not prove
definitively -- that these methods will never be able to learn anything like
human-level intelligent behavior, even with billions of times as much
computational power as we have today.
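
For instance, the gambler's-ruin population-sizing model from the Goldberg
school (Harik, Cantu-Paz, Goldberg & Miller) -- quoted here from memory, so
treat the exact constants as illustrative rather than authoritative --
estimates the population needed to decide correctly among competing
building blocks, and it grows exponentially in the building-block order k:

  import math

  def pop_size(k, m, alpha=0.01, sigma_bb=1.0, d=1.0):
      # Gambler's-ruin estimate (as I recall it):
      #   n = 2**(k-1) * ln(1/alpha) * sigma_bb * sqrt(pi*(m-1)) / d
      # k        : building-block order (bits that must be decided together)
      # m        : number of building blocks in the genome
      # alpha    : acceptable probability of deciding a block wrongly
      # sigma_bb : fitness noise (std dev) seen by a building block
      # d        : fitness signal between best and second-best block
      return (2 ** (k - 1)) * math.log(1 / alpha) * sigma_bb \
             * math.sqrt(math.pi * (m - 1)) / d

  # Required population blows up exponentially as the pieces that must
  # be co-adapted get larger:
  for k in (2, 4, 8, 16, 32):
      print(k, round(pop_size(k, m=100)))

Plug in the kind of k you'd need for even a small coherent program fragment
and the required populations become absurd, which is the gist of my claim
above.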

Novamente uses GP-like techniques, so I don't think these methods are bereft
of power for general AI. But on their own they are clearly not sufficient.

-- Ben


