From: Joel Peter William Pitt (joel.pitt@gmail.com)
Date: Thu Sep 08 2005 - 18:02:40 MDT
On 9/8/05, Phil Goetz <philgoetz@yahoo.com> wrote:
> 
> --- Michael Wilson <mwdestinystar@yahoo.co.uk> wrote:
> 
> > Indeed I can quite easily state that no causally chaotic system is
> > stable under self-modification, and that all such systems will
> > rapidly disintegrate or fall into a causally clean attractor
> > on gaining the ability to self-modify. If you don't accept
> > that an AGI based on 'Complex Adaptive' low-level mechanisms
> > will inevitably fall into a 'non-Complex' attractor, then show
> > us such a system that actually works.
> 
> Actually, the main value of complex systems studies may be in
> informing us that this chaotic regime is where our system is
> most productive, and in providing us with the math needed to
> keep it from either "disintegrating" into randomness or falling
> into a stable point or periodic attractor.
> 
This would be interesting if anybody knew of research on adaptive systems 
that could tune themselves to an optimal state of Complexity/Extropy on the 
edge of chaos and keep themselves there, or at least in that general vicinity.
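As a toy sketch of the kind of thing I mean (my own illustration, not something 
from the literature): take the logistic map, estimate its Lyapunov exponent on 
the fly, and nudge the parameter r so that the exponent hovers near zero, which 
is roughly where that map's "edge of chaos" sits.

# Toy sketch (my own illustration, not from any published work): keep a
# logistic map near the edge of chaos by nudging its parameter so that an
# estimate of the Lyapunov exponent stays close to zero.
import math

def lyapunov_estimate(r, x, steps=2000):
    # Average log-derivative of the logistic map x -> r*x*(1-x);
    # negative means ordered, positive means chaotic, ~0 is the edge.
    total = 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-12)
    return total / steps, x

def tune_to_edge(r=3.2, x=0.4, iterations=200, gain=0.02):
    # Crude controller: raise r when the map is too ordered (lambda < 0),
    # lower it when it is too chaotic (lambda > 0).
    for _ in range(iterations):
        lam, x = lyapunov_estimate(r, x)
        r -= gain * lam                # push lambda toward zero
        r = min(max(r, 3.0), 4.0)      # stay in the map's usual parameter range
    return r

print(tune_to_edge())  # tends to settle around r ~ 3.57, the onset of chaos

Of course a map with one knob is a long way from an AGI, but the same idea 
(measure where the system sits on the order/chaos axis and feed that back into 
its own parameters) is what I'd want such research to address.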
*If* complex systems theory is of any use to an AI (I know lots of you think 
it isn't), then I wonder whether different degrees of order/chaos (still on the 
"edge of chaos", but at different points along it) are better suited to solving 
or reasoning over certain kinds of problems.
-joel