From: John Smart (email@example.com)
Date: Sat Apr 14 2001 - 12:41:48 MDT
> Scenario B. You have a slow, clunky assembly of programs that suck
> processor time and gobble up memory in ways that make particle collision
> analysis systems look tame. Just to run it you've got to spend crazy cash
> leasing supercomputers. But, IT WORKS. The damn thing improves itself
> without much, if any, help.
> Patrick McCuller
Well spoken, Patrick! It seems we keep coming back to adaptation as the key
question. How do we know *when* our systems are on a trajectory toward
increased general adaptive complexity? Toward increased autonomy? We'll throw
money and brains at the bottlenecks once we know we're progressing. That
means first defining the important problems in cognitive neuroscience; then
you know whether your GAs/NNs are doing what you want.
Larry Fogel just gave a talk at CSEOL at UCLA on this issue; Peter Voss and
I were discussing it last night.
His brother David has done some interesting work in the elegantly simple
system of checkers, you may recall:
Chellapilla, K. and Fogel, D.B. 1999. Evolving neural networks to play
checkers without expert knowledge. IEEE Transactions on Neural Networks,
10(6):1382-1391.
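The core idea in that paper is neuroevolution: the network's weights are
mutated and selected rather than trained by backprop. Chellapilla and Fogel
used coevolutionary tournaments (networks scored by playing games against
each other); the toy sketch below swaps that in for a simple supervised
fitness so it runs standalone. The network size, the material-balance
target, and all names here are illustrative assumptions, not the paper's
actual setup:

```python
import math
import random

random.seed(0)

H = 4   # hidden units in the toy evaluator
D = 8   # toy "board" features (stand-in for real board inputs)

def init_weights():
    # flat weight vector: D*H input->hidden weights plus H hidden->output
    return [random.gauss(0, 0.5) for _ in range(D * H + H)]

def evaluate(w, board):
    # tiny one-hidden-layer tanh network scoring a feature vector
    hidden = [math.tanh(sum(w[h * D + j] * board[j] for j in range(D)))
              for h in range(H)]
    return math.tanh(sum(w[D * H + h] * hidden[h] for h in range(H)))

def fitness(w, positions, targets):
    # negative squared error against toy "expert" scores
    # (the real paper used win/loss results from coevolved games instead)
    return -sum((evaluate(w, p) - t) ** 2 for p, t in zip(positions, targets))

# toy data: target score is just a tanh of the material balance
positions = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(30)]
targets = [math.tanh(p[0] - p[1]) for p in positions]

def mutate(w, sigma=0.1):
    # Gaussian perturbation of every weight
    return [wi + random.gauss(0, sigma) for wi in w]

# (1+5) evolution strategy: keep the parent, breed 5 mutants, select the best
parent = init_weights()
start = fitness(parent, positions, targets)
for gen in range(200):
    pool = [parent] + [mutate(parent) for _ in range(5)]
    parent = max(pool, key=lambda w: fitness(w, positions, targets))
final = fitness(parent, positions, targets)
```

Because the parent always survives into the selection pool, fitness can
never decrease across generations; that elitism is what lets the loop
improve "without much, if any, help."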
An article on this:
It's not quite Chinook yet, but it's totally home grown :)
David's got other nice emergences in the pot:
Fogel, D.B., Wasson, E.C., Boughton, E.M., and Porto, V.W. 1998. Evolving
artificial neural networks for screening features from mammograms.
Artificial Intelligence in Medicine, in press.
I've written a mini review of a good book on this approach in embodied AI,
"Understanding Intelligence," at my site. Feedback always appreciated.
Understanding Accelerating Change
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT