(no subject)

From: rpwl@lightlink.com
Date: Sat Oct 08 2005 - 20:41:54 MDT

> Suppose we have a set E of entities with an operation + defined on it. (For
> instance, E might be the set of subsets of some other set of elementary
> entities; or it might be the set of multisets formed from some other set of
> elementary entities.)
> The + operator may be assumed commutative and associative. Examples of
> suitable + operators would be operations of set or multiset union.
> Consider also a set F of functions mapping E into E. Elements of E may be
> mapped into elements of F by considering them as “constant functions.”
> Define an operation * on F as function composition. Of course, * is not
> commutative, though (being composition) it is associative.
> The operation + may be extended from E onto F in an obvious way, so that we
> may now think about {+,*} as algebraic operations on F.
> Distributivity may be assumed with regard to function application, i.e.
> (f+g)(x) = f(x) + g(x)
> Within F, we may assume there is a zero element (the constant function
> mapping everything to zero) so that
> x+0=0+x=x
> 0*f = 0
> and a 1 element (the identity function) so that
> 1*f = f*1 = f
> Finally, let us assume there is a valuation measure H on E, meaning that for
> any x in E, H(x) is defined as some nonnegative real number. Also assume a
> two-component valuation measure H(x,y) so that H(x,0) = H(x).
> Intuitively, H(x) is to be thought of as the complexity of producing x, and
> H(x,y) is to be thought of as the complexity of producing x given y as
> input. This interpretation makes it plausible to assume that
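[The algebraic setup quoted above can be sketched concretely. A minimal
Python sketch, assuming (purely for illustration) that E is the set of
frozensets over some ground set, + is set union, and the zero of E is the
empty set — none of these concrete choices are fixed by the post itself:]

```python
# Illustrative model of the quoted setup: E = frozensets, + = union,
# F = functions from E to E, * = composition, + extended pointwise.

def plus(x, y):
    """The commutative, associative + on E (here: set union)."""
    return x | y

def const(x):
    """Embed an element of E into F as a constant function."""
    return lambda _ignored: x

def compose(f, g):
    """The * operation on F: function composition, (f*g)(x) = f(g(x))."""
    return lambda x: f(g(x))

def fplus(f, g):
    """+ extended from E to F pointwise: (f+g)(x) = f(x) + g(x)."""
    return lambda x: plus(f(x), g(x))

zero = const(frozenset())   # the zero element of F (constant empty function)
identity = lambda x: x      # the 1 element of F

f = const(frozenset({1, 2}))
g = const(frozenset({3}))

assert compose(identity, f)(frozenset()) == frozenset({1, 2})  # 1*f = f
assert compose(zero, f)(frozenset()) == frozenset()            # 0*f = 0
assert fplus(f, g)(frozenset()) == frozenset({1, 2, 3})        # (f+g)(x)
```

[The valuation measure H is deliberately left out of this sketch, since —
as I ask below — it is exactly the part whose definition is unclear.]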

[BTW: my previous question was dumb: I was assuming you were trying to
say something a lot more complicated than you really were. Apologies!]

This idea of the "complexity of producing x" now confuses me.

I have too many questions to list... could you help by just saying what
this means, and assure me that it does not introduce hidden (i.e., implicit)
assumptions about the possible choices of E, F, etc.?

So far your analysis is analytical: for it to continue to be so,
"complexity" must be analytically defined, not just intuitively defined, no?
In other words, I am worried that the plausibility of the rest may hinge
on the plausibility of what you mean here.

Many thanks.

Richard Loosemore.

> H(f*g) <= H(f) + H(g,f)
> We may now define the notion of pattern as follows:
> f is a pattern in x iff:
> f(0) = x and H(f) < H(x)
> That is, f is a pattern in x if f produces x and yet f is simpler than x.
> As a consequence, it follows that
> f*g is a pattern in x if:
> (f*g)(0) = x and H(f)+H(g,f) < H(x)
> (note that this is if, not iff).
> We may define a distance measure between elements of F as follows:
> D(f,g) = (H(f,g) + H(g,f))/2
> If H(f,g) is interpreted as the complexity of producing f from g as an
> input, then this is a metric (a mathematically valid measure of distance)
> since
> H(f,g) <= H(f,w) + H(w,g)
> from which it follows that D also obeys the triangle inequality. We may
> produce a similarity measure from this by normalizing D to [0,1] via
> D1(f,g) = D(f,g) / [1+D(f,g)]
> and then setting
> sim(f,g) = 1 - D1(f,g)
> Now, what about emergence? Emergence occurs when we have a pattern in a
> combination of two things, that does not occur in either of the two things
> individually. That is, it occurs when one has a situation so that:
> · f is a pattern in x + y
> · There is no g similar to f so that g is a pattern in x or g is a pattern
> in y
> Thus we see that a natural notion of emergence arises from simple notions of
> computation.
> This account can be made probabilistic if we introduce a “reference
> dynamical system” and define H(f,g) as the probability of f being produced
> within time T by the dynamical system beginning from initial condition g.
> For instance, if the reference dynamical system for computing H(f,g)
> consists of a system that creates random computer programs taking g as
> input, choosing each program with probability proportional to 2^(-program
> length), then this probabilistic-dynamic perspective recovers an
> approximation of what one gets from defining H(f,g) as relative Kolmogorov
> complexity.
> -- Ben
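
[To make the "pattern" definition above concrete, here is a toy
instantiation in Python. Everything concrete in it is an assumption for
illustration, not something from Ben's post: E is taken to be strings, a
candidate process f is a pair (s, n) meaning "repeat s n times", and H is
plain description length:]

```python
# Toy instantiation of: f is a pattern in x iff f(0) = x and H(f) < H(x).
# Concrete choices (strings, repeat-programs, length-as-complexity) are
# illustrative assumptions only.

def H_str(x: str) -> int:
    """Toy complexity of an element of E: its length."""
    return len(x)

def H_prog(s: str, n: int) -> int:
    """Toy complexity of the repeat-program (s, n): seed plus count."""
    return len(s) + len(str(n))

def run(s: str, n: int) -> str:
    """Apply the repeat-program: produce s repeated n times."""
    return s * n

def is_pattern(s: str, n: int, x: str) -> bool:
    """f is a pattern in x iff f produces x and f is simpler than x."""
    return run(s, n) == x and H_prog(s, n) < H_str(x)

x = "ab" * 10                        # H(x) = 20
assert is_pattern("ab", 10, x)       # H(f) = 2 + 2 = 4 < 20: a pattern
assert not is_pattern(x, 1, x)       # the trivial program is not simpler
```

[Under this toy H the definitions all type-check and behave sensibly — but
that only sharpens my question, since the toy works precisely because
"complexity" was given an explicit analytical definition first.]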


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT