Re: The Relevance of Complex Systems

From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Thu Sep 08 2005 - 23:35:15 MDT


Joel Peter William Pitt wrote:
> The states in a chaotic system *are* able to be predicted with the
> knowledge of initial conditions and the rules they follow. However
> any slight deviation or error in the initial conditions results in
> the system following different trajectories - so, in the real world
> a chaotic system is deterministic, but we can't predict its future

Correct. When engineers design systems, they make extensive use of
state relationships that act as compression functions, mapping a
large range of input states onto a smaller range of output states
(strictly, they enforce specific sharp or near sharp set constraints
on the state of the cause/input and the state of the effect/output).
Imagine a buffer in digital electronics. There is an infinite range
of input potentials, but all the input states below the threshold
cause an output of 0, while all the input states above the threshold
result in an output of 1. The buffer has compressed a huge range of
input states down to two, or if you want to account for the slight
variances in output it has enforced the constraints 'input state
member of above_threshold -> output state member of on_states'
and 'input state member of below_threshold -> output state member
of off_states'. In practice the constraint will always be very
slightly fuzzy, because below_threshold and above_threshold will be
fuzzy sets that aren't quite disjoint. But because we know that
on_states is a very small subset of above_threshold, and that
off_states is a small sharp subset of below_threshold, and that
these two /are/ disjoint, we can string indefinite numbers of logic
gates together and reliably predict the final state of any
cycle-free network. Synapses are similar but less reliable.
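
To make that concrete, here is a toy sketch in Python (the threshold,
voltage levels and noise figure are invented for illustration, not taken
from any real part):

import random

THRESHOLD = 1.4            # volts; inputs below this drive the output low
V_LOW, V_HIGH = 0.1, 3.2   # nominal 'off_states' / 'on_states' levels

def buffer(v_in):
    # Compress an analogue input voltage onto one of two narrow output bands.
    nominal = V_HIGH if v_in > THRESHOLD else V_LOW
    return nominal + random.gauss(0.0, 0.02)   # slight variance in the output

# Wildly different input states collapse onto the same logical state.
for v in (0.0, 0.3, 1.1, 1.9, 2.5, 3.3):
    print('%4.1f V -> %4.2f V' % (v, buffer(v)))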

The end result is that we can design highly ordered (and
highly useful) systems by employing state space compression. These
systems can be combined by deductive reasoning into supersystems
that we can reliably predict will work before we actually build
them, whereas if we were working with chaos we wouldn't be able
to improve much on trial-and-error.
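
For instance (again only a sketch, with invented components), you can wire
noisy threshold gates into a cycle-free network and predict its logical
behaviour by pure Boolean reasoning, ignoring the analogue details:

import random

THRESHOLD, V_LOW, V_HIGH = 1.4, 0.1, 3.2

def noisy(v):
    # every wire carries a little analogue jitter
    return v + random.gauss(0.0, 0.05)

def nand(a, b):
    # threshold both inputs, NAND the resulting bits, emit a noisy level
    bit = not (a > THRESHOLD and b > THRESHOLD)
    return noisy(V_HIGH if bit else V_LOW)

def xor(a, b):
    # XOR composed from four NANDs -- deduced to work before it is 'built'
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

# The analogue values vary from run to run; the logical prediction never does.
for a in (V_LOW, V_HIGH):
    for b in (V_LOW, V_HIGH):
        out = xor(a, b) > THRESHOLD
        assert out == ((a > THRESHOLD) != (b > THRESHOLD))
print('all four input combinations behaved as deduced')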

Now the curious thing about AGIs is that they are the ultimate
in state compression functions. When Eliezer talks about
building a 'really powerful optimisation process', he is talking
about building a system that will (reliably) squeeze all the
probability mass in the PDF over possible future histories into
a small subset that the goal system defines as desirable. This
is the very definition of 'goal seeking process'; a process
which predictably tends to steer the state of the world into
desirable states, though the intermediate states and means it
uses to do that may not be so predictable.
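
A toy comparison (my own construction; the state space, target set and
step counts are arbitrary) of how a goalless process and a goal-seeking
process spread probability mass over final states:

import random

DESIRABLE = set(range(42, 45))   # a small subset of the possible futures
STEPS, TRIALS = 60, 10000

def random_process(state):
    # no goal: drift at random, end up more or less anywhere
    for _ in range(STEPS):
        state = (state + random.choice((-1, 1))) % 100
    return state

def optimising_process(state):
    # goal-seeking: at each step move whichever way looks closer to the
    # desirable set; the path varies, the endpoint barely does
    for _ in range(STEPS):
        if state in DESIRABLE:
            break
        state += 1 if state < min(DESIRABLE) else -1
    return state

for name, proc in (('random', random_process), ('optimising', optimising_process)):
    hits = sum(proc(random.randrange(100)) in DESIRABLE for _ in range(TRIALS))
    print('%-10s %.1f%% of runs end in the desirable set' % (name, 100.0 * hits / TRIALS))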

In a Bayesian AGI, absolutely everything happens because it is
suspected of helping to achieve some goal. This is what 'causally
clean' means, and it means that the kind of predictability that
applies to the AGI as a whole continues to apply on a more local
scale as you drill down into the components. An AGI which
generates actions by a less rigorous mechanism will probably not
show the same kind of predictability.
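
Schematically (my illustration only -- the actions, probabilities and
utilities are invented, and this isn't anyone's actual architecture),
'causally clean' action generation looks something like:

# beliefs: for each candidate action, a distribution over outcomes
BELIEFS = {
    'do_nothing': {'goal_achieved': 0.01, 'goal_missed': 0.99},
    'plan_A':     {'goal_achieved': 0.60, 'goal_missed': 0.40},
    'plan_B':     {'goal_achieved': 0.35, 'goal_missed': 0.65},
}
UTILITY = {'goal_achieved': 1.0, 'goal_missed': 0.0}   # the goal system

def expected_utility(action):
    return sum(p * UTILITY[o] for o, p in BELIEFS[action].items())

def choose_action():
    # the chosen action is fully explained by (beliefs, goal); nothing else
    # feeds into the decision, which is what makes the behaviour auditable
    return max(BELIEFS, key=expected_utility)

print(choose_action())   # -> 'plan_A' under the numbers assumed above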

> When one tunes a car you adjust timing and other variables in
> order to get the emergent behaviour of a working car

If a 'working car' is emergent behaviour, then the behaviour of
any system that is a collection of subcomponents is 'emergent'
(regardless of predictability), and the term becomes fairly useless.

 * Michael Wilson

        
        
                


