Re: The Relevance of Complex Systems [was: Re: Retrenchment]

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Thu Sep 08 2005 - 18:38:52 MDT


Phil Goetz wrote:
> Actually, the main value of complex systems studies may be in
> informing us that this chaotic regime is where our system is
> most productive, and in providing us with the math needed to
> keep it from either "disintegrating" into randomness or falling
> into a stable point or periodic attractor.

If all attractors were bad, that might be true. I am claiming
that many of the attractors in the cognitive structure space are
also local or global optima in reasoning performance, and that
we should aim to start already in such an attractor (one that
stably implements Friendliness). There may well still be a role
for the kind of pattern recognition that one uses to categorise
'emergent' behaviour in internal performance optimisation, but
this is an issue of optimising content (specifically, metadata
on content), not optimising structure. I'm not going to try
and prove that claim at this time, but I reserve the right to
poke holes in any attempts to disprove it, and to make the point
that if it /can/ be done, then it /should/ be done (because
reliable Friendliness is the ultimate goal).
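
For anyone who wants the attractor taxonomy made concrete, here is a
throwaway sketch (just the textbook logistic map, not a model of any
cognitive architecture; the function name and parameter values are my
own invention for the example) of a single parameter moving a system
between a stable point, a periodic attractor, and the chaotic regime
Phil refers to:

# Toy illustration only: the logistic map x' = r*x*(1-x), which settles
# into a stable point, a period-2 cycle, or chaos depending on r.

def iterate_logistic(r, x0=0.4, warmup=500, keep=6):
    # Discard transients, then report the states the system settles into.
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

for r, label in [(2.8, "stable point"), (3.2, "period-2 cycle"),
                 (3.9, "chaotic regime")]:
    print(label.ljust(16), iterate_logistic(r))

The point is simply that 'attractor' covers this whole range; my claim
above is only that the attractors worth deliberately starting in are at
the stable, Friendliness-preserving end of it.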

>> trying to use raw probability theory for high level cognition would
>> be as silly as GOFAI symbolic reasoning on detail-free tokens
>
> All along, I was thinking that was what you had in mind, and that
> you were crazy.

Thousands of researchers seem to be convinced that classic symbolic
AI /is/ the way to AGI. While this might be 'crazy' in the absolute
sense, I would consider it the unfortunate result of the misleading
intuitive notions built into human introspection, and thus I wouldn't
say that they're any less sane than the average human. Of course, you
may just be speaking colloquially. Regardless, it would certainly be
'short-sighted', 'silly' and possibly 'ignorant'.

Ben Goertzel wrote:
>> Indeed I can quite easily state that no causally chaotic system is
>> stable under self-modification, and that all such systems will
>> rapidly disintegrate or fall into a causally clean attractor
>> on gaining the ability to self-modify. If you don't accept
>> that an AGI based on 'Complex Adaptive' low-level mechanisms
>> will inevitably fall into a 'non-Complex' attractor, then show
>> us such a system that actually works.
>
> I don't agree with this assertion at all. This doesn't hold for
> other complex self-organizing systems and I don't see why you think
> it should hold for AGIs. Do you have any argument to support this
> contention?

I don't agree with that assertion either. I suspect that the majority
of the design space will self-modify towards stable rather than
cyclical attractors, particularly those attractors which are good
approximations of true normative reasoning, but this relies on personal
assumptions which I have not proved and is beside the point. The actual
point is that it's as silly for me to make an outright claim that 'no
emergent design will ever survive self-modification' as it is for
Loosemore to claim that 'no Bayesian design will ever survive
self-modification'.
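
To illustrate what I mean by 'self-modify towards stable rather than
cyclical attractors' (a deliberately trivial toy, not a claim about any
real design; the performance function and step sizes are invented for
the example), treat the design as a single parameter that the system
repeatedly rewrites to improve its own score:

# Toy model: self-modification as iterated update of the system's own
# parameter theta, chasing a hypothetical performance peak at theta = 1.

def performance_gradient(theta):
    # Gradient of the score -(theta - 1)^2; positive when below the peak.
    return -2.0 * (theta - 1.0)

def self_modify(theta, step):
    # One round of self-modification: move theta along the gradient.
    return theta + step * performance_gradient(theta)

for step, label in [(0.3, "stable point"), (1.0, "period-2 cycle"),
                    (1.05, "disintegrating")]:
    theta, history = 4.0, []
    for _ in range(12):
        theta = self_modify(theta, step)
        history.append(round(theta, 2))
    print(label.ljust(16), history[-4:])

Which of those regimes any given design actually sits in is exactly the
empirical question that neither bare assertion settles.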

 * Michael Wilson

                