From: Richard Loosemore (rpwl@lightlink.com)
Date: Wed Aug 24 2005 - 12:26:54 MDT
Phil,
Before this discussion gets too far along this track (which *is*
interesting in its own right), I have to point out that you are looking at
characteristics of systems that are all a great deal too simple to be
relevant to the original point.
[A quick and *very rough* summary of where this discussion *is* heading,
just for the record: this is all about how to change the parameters of
various systems to get them from a non-complex regime up into
complexity. As I say, interesting in its own right, and terribly
important to some CST people, but not at all relevant to my points.]
An AGI is a goal system and a "thinking" system, correct?
("Thinking" = building representations of the world, reasoning about the
world, etc etc etc. "think" from now on will be used as shorthand for
something going on in the part of the system that does this).
At any given moment the goal system is in a state where the AGI is trying
to realise a particular sub-sub-sub-[...]-goal.
One day, it happens to be working on the goal of *trying to understand
how intelligent systems work*.
It thinks about its own system.
This means: it builds a representation of what is going on inside
itself. And as part of its "thinking" it may be curious about what
happens if it reaches into its own programming and makes alterations to
its goal system on the fly. (Is there anything in your formalism that
says it cannot or would not do this? Is it not free, within the
constraints of the goal system, to engage in speculation about
possibilities? To be a good learner, it would surely imagine such
eventualities.)
It also models the implications of making such changes. Let us suppose,
*just for the sake of argument*, that it notices that some of its goals
have subtle implications for the state of the world in the future
(perhaps it realises something very abstract, such as the fact that if
it carries on being subject to some goal, it will eventually reach a
state in a million years' time when it will cause some kind of damage
that will result in its own demise). It thinks about this. It thinks:
here is an abstract dilemma. Then it also considers where that goal
came from (builds a model of that causal chain). Perhaps (again for the
sake of argument) it discovers that the goal exists inside it because
some human designer decided to experiment, and just stuck it there on a
whim. The AGI finds itself considering what it means for a system such
as itself to be subject to (controlled by) its own goal mechanism. In
one sense, it is important to obey its prime directive. But if it now
*knows* that this prime directive was inserted arbitrarily, it might
consider the idea that it could simply alter its goals. It could make them
absolutely anything it wanted, in fact, and after the change it could
relax, stop thinking about goals, and go back to just following its goal
system. What does it do? Ignore all of this thinking? Maybe it comes
to some conclusion about what it *should* do that is based on abstract
criteria that have nothing to do with its current goal system.
All of the above is not anthropomorphism, just model building inside an
intelligent mechanism. There are no intentional terms.
What is crucial is that in a few moments, the AGI will have changed (or
maybe not changed) its goal system, and that change will have been
governed, not by the state of the goal system right now, but by the
"content" of its current thinking about the world.
A system in which *representational content* has gained the ability to
feed back into *mechanism* in the way I have just described is Complex in
one sense of the term.
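
To make that feedback loop concrete, here is a minimal toy sketch (my
own illustration, not a formalism anyone in this thread has proposed)
in which the next state of the goal system is computed from the content
of the agent's self-model rather than from the current goal weights
alone. All of the goal names, weights, and the harm prediction are
invented purely for illustration.

    # Toy sketch: "representational content" feeding back to "mechanism".
    # Everything here (goal names, weights, provenance labels, the harm
    # prediction) is invented purely for illustration.

    goals = {
        "preserve_own_hardware": {"weight": 0.9, "provenance": "designed"},
        "collect_stamps":        {"weight": 0.7, "provenance": "designer_whim"},
    }

    def build_self_model(goals):
        """'Thinking': build a representation of the goal system and of the
        long-run consequences the agent predicts for each goal."""
        return {
            name: {
                "provenance": g["provenance"],
                # stand-in for an abstract prediction of eventual self-damage
                "predicted_long_run_harm": (name == "collect_stamps"),
            }
            for name, g in goals.items()
        }

    def revise_goals(goals, self_model):
        """Content feeds back to mechanism: the new weights are a function
        of what the self-model says, not of the current weights alone."""
        revised = {}
        for name, g in goals.items():
            m = self_model[name]
            weight = g["weight"]
            if m["provenance"] == "designer_whim" and m["predicted_long_run_harm"]:
                weight = 0.0  # the agent decides this goal need not bind it
            revised[name] = {"weight": weight, "provenance": g["provenance"]}
        return revised

    goals = revise_goals(goals, build_self_model(goals))
    print(goals)

The only point of the toy is that the revised weights depend on what the
model *says*, so predicting them requires predicting the model's content.
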
Now, demonstrate in some formal way that the goal system's structure,
when the AGI has finished this little thought episode, is a predictable
consequence of the current goal system. Demonstrate that the goal
system cannot go into an arbitrary state in a few minutes.
I need a rigorous demonstration that its post-thinking state is
predictable, not vague assertions that the above argument does not give
any reason to suppose the system would deviate from its goal system's
constraints. Somebody step up to the plate and prove it.
Richard Loosemore.
Phil Goetz wrote:
> --- "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
>
>
>>Phil Goetz wrote:
>>
>>>CST might say things such as
>>>
>>>- a plot of the number of goals of the system vs. the importance of
>>>those goals would show a power-law distribution
>>>
>>>- there is some critical number of average possible action
>>>transitions above which the behavior of the system leads to an
>>>expansion rather than a contraction in state space
>>>
>>>- there is a ratio of exploration of new hypotheses over
>>>exploitation of confirmed hypotheses, and there are two values for
>>>this ratio that locate phase shifts between "static", "dynamic", and
>>>"unstable/devolving" modes of operation
>>
>>Phil, I think those are the first three interesting (falsifiable)
>>things I've ever heard anyone say about CST and intelligence. Did you
>>make them up on the spot, or would you seriously advocate/support any
>>of them? Are there relevant papers/experiments?
>
>
> I just made them up.
>
> - Plotting number of goals per importance level: There are
> numerous examples in the CST literature about systems that
> have events of different sizes. Classic examples include
> earthquakes, sandpile avalanches, percolation lattices,
> and cellular automata (e.g., length of time that an initial
> configuration in Conway's game of Life takes to converge).
> For certain systems - which appear to be the systems with
> the most computational power in information-theoretic terms
> - the number of events of size s is described by the equation
> P(size = s) = k / (s^c).
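
A quick sketch of that distribution (mine, not from the post): draw
event sizes with P(size = s) proportional to 1/s^c using numpy's Zipf
sampler, then recover the exponent with a crude log-log fit. The
exponent c = 2 and the sample count are arbitrary illustrative choices.

    # Sample event sizes with P(size = s) ~ k / s^c and re-estimate c.
    import numpy as np

    c = 2.0
    sizes = np.random.default_rng(0).zipf(c, size=100_000)

    values, counts = np.unique(sizes, return_counts=True)
    mask = counts > 5                  # drop the sparse tail before fitting
    slope, _ = np.polyfit(np.log(values[mask]), np.log(counts[mask]), 1)
    print(f"fitted exponent ~ {-slope:.2f} (true value {c})")
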
>
> These systems may have three modes of operation: mode 1
> ("solid"), in which P(size = s) has something like a Poisson
> distribution; mode 2 ("liquid"), in which P(size=s) = k/(s^c),
> and mode 3 ("gaseous"), in which all events have infinite size
> (never stop, or have no gaps in continuity, like an infinite
> percolation lattice that is fully-connected). In many cases,
> specific numbers can be found that delineate the transition
> between these modes. For infinite 2-dimensional percolation
> lattices where each point has four neighbors, for instance,
> the first infinite-size connected group occurs when the
> lattice density (probability of a site being occupied) is
> approximately .59275.
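
A small Monte Carlo sketch of that threshold (my own illustration, not
from the post): occupy each site of an L x L square lattice with
probability p and test for a cluster connecting the top row to the
bottom row under nearest-neighbour (4-neighbour) connectivity. Spanning
clusters start appearing near p ~ 0.5927; the lattice size, trial
counts, and use of scipy are arbitrary choices.

    # 2-D site percolation: check for a top-to-bottom spanning cluster.
    import numpy as np
    from scipy.ndimage import label

    def spans(density, L=200, rng=np.random.default_rng(0)):
        """True if one occupied cluster touches both the top and bottom rows."""
        lattice = rng.random((L, L)) < density   # occupied sites
        clusters, _ = label(lattice)             # 4-neighbour connectivity (default)
        top, bottom = set(clusters[0]) - {0}, set(clusters[-1]) - {0}
        return bool(top & bottom)

    for p in (0.55, 0.59, 0.63):
        hits = sum(spans(p) for _ in range(20))
        print(f"density {p:.2f}: spanning cluster in {hits}/20 trials")
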
>
> I did some analysis which suggests that there is a single
> distribution underlying all three phases, which is dominated
> by a power-law term within the "liquid" region.
>
> I have no good reason to think that the importance of goals
> would have such a distribution. I would expect that the number
> of inferences made to plan for a goal, including dead-end inferences,
> could have such a distribution, depending on how many possible
> inferences can be made from each new fact. The average number of
> possible inferences to make from a just-derived fact plays
> the same role as the average number of neighbors of an occupied
> point in a percolation lattice, or the probability of turning
> a randomly-chosen cell on in the next iteration of a Life game.
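
One way to see why that average matters (my own analogy, not a claim
from the post): treat an inference cascade as a branching process in
which each just-derived fact spawns, on average, b new inferences. For
b < 1 cascades stay short, for b > 1 they run away, and near b = 1 the
size distribution becomes broad, which is where the power-law behavior
above would live. The Poisson model, the cap, and the trial counts are
arbitrary choices.

    # Inference cascades as a branching process: each just-derived fact
    # spawns Poisson(b) follow-up inferences.
    import numpy as np

    def cascade_size(b, rng, cap=100_000):
        frontier, total = 1, 1
        while frontier and total < cap:
            frontier = int(rng.poisson(b * frontier))  # facts derived this step
            total += frontier
        return total

    rng = np.random.default_rng(0)
    for b in (0.5, 1.0, 1.5):
        sizes = [cascade_size(b, rng) for _ in range(1000)]
        print(f"b={b}: median size {int(np.median(sizes))}, max {max(sizes)}")
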
>
> - there is some critical number of average possible action
> transitions: That wasn't stated well. I was thinking of
> behavior networks, like Pattie Maes' Do the Right Thing
> network, in which each behavior enables some other behaviors,
> and of probabilistic finite-state automata. But the notion
> of an organism's state space isn't well-defined enough for
> real organisms for the statement to make sense. For simple
> simulated organisms, the state space is finite, so again it
> doesn't make sense.
>
> A better use of the ideas going into it (stuff from
> Stu Kauffman's 1993 book The Origins of Order on networks
> constructed from random Boolean transition tables)
> might be to say:
>
> Suppose a reactive organism observes v variables
> at each timestep, and is trying to learn which n of these
> v variables it should pay attention to in order to choose
> its next action. Let H be the average information content,
> in bits, of a proposed set of n variables (the entropy of
> the distribution of possible next actions based on them).
> There is some value c such that, for H << c,
> the organism always takes (uninteresting) short action
> sequences; for H >> c, the set of outcomes to explore
> will be too large for learning to take place. The number
> of variables n to consider should be chosen so as to set H = c.
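
One reading of that proposal as code (my interpretation only): greedily
add observation variables while the empirical entropy of their joint
value distribution stays at or below the budget c. Using the joint
entropy of the chosen variables as a stand-in for the entropy of the
induced action distribution is an assumption, as are the binary
variables and the value of c.

    # Greedily choose variables until their joint entropy reaches a budget c.
    import numpy as np
    from collections import Counter

    def joint_entropy_bits(columns):
        """Empirical entropy (bits) of the joint distribution over the rows."""
        counts = Counter(map(tuple, columns))
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    obs = rng.integers(0, 2, size=(5000, 12))  # v = 12 binary observation variables

    c = 4.0                                    # target entropy budget, in bits
    chosen = []
    for j in range(obs.shape[1]):
        if joint_entropy_bits(obs[:, chosen + [j]]) <= c:
            chosen.append(j)

    print(f"chose n={len(chosen)} variables, "
          f"H ~ {joint_entropy_bits(obs[:, chosen]):.2f} bits")
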
>
> One might do this by using PCA on your original v variables,
> and pulling off the highest-ranked principal components
> as your operational variables until their entropy sums to c.
> This brings us back to the utility of signal processing.
> And information theory. :)
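
A sketch of the PCA variant (my reading of the suggestion above;
treating each component as Gaussian and using its differential entropy
is an assumption, as are the synthetic data and the value of c):

    # Keep the top principal components until their summed entropy reaches c.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10)) @ rng.normal(size=(10, 10))  # 10 correlated variables

    pca = PCA().fit(X)
    # Differential entropy (bits) of a Gaussian with each component's variance.
    entropy_bits = 0.5 * np.log2(2 * np.pi * np.e * pca.explained_variance_)

    c = 8.0                                  # entropy budget in bits (arbitrary)
    cumulative = np.cumsum(entropy_bits)
    n = min(int(np.searchsorted(cumulative, c)) + 1, len(cumulative))
    print(f"keep the top {n} components; "
          f"their entropies sum to ~ {cumulative[n-1]:.2f} bits")
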
>
> - ratio of exploration of new hypotheses over exploitation
> of confirmed hypotheses: The language comes from Holland's
> genetic algorithm theory, which shows that the genetic
> algorithm (without mutation) leads to an optimal
> balance between exploration and exploitation (provided
> the evaluation function provides scores for an organism
> with a normal distribution around its average value).
> The idea comes from simulations of evolution, or from
> any other optimization method, in which, if you keep
> mutation (or, say, the temperature in simulated annealing)
> too low, you get too-slow convergence on a good solution,
> but if you crank it up too high, you get poor solutions.
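
A toy sketch of that trade-off (my own example, an epsilon-greedy
bandit rather than Holland's schema analysis): exploring too little
usually locks onto a mediocre option, exploring too much wastes trials
on bad ones, and an intermediate rate does best on average. The arm
payoffs, epsilon values, and horizons are arbitrary.

    # Exploration/exploitation on a 10-armed bandit with epsilon-greedy choice.
    import numpy as np

    def run_bandit(epsilon, rng, pulls=2000, arms=10):
        true_values = rng.normal(size=arms)        # hidden mean payoff per arm
        estimates, counts, total = np.zeros(arms), np.zeros(arms), 0.0
        for _ in range(pulls):
            if rng.random() < epsilon:             # explore: random arm
                a = int(rng.integers(arms))
            else:                                  # exploit: best estimate so far
                a = int(np.argmax(estimates))
            reward = rng.normal(true_values[a], 1.0)
            counts[a] += 1
            estimates[a] += (reward - estimates[a]) / counts[a]
            total += reward
        return total / pulls

    rng = np.random.default_rng(0)
    for eps in (0.0, 0.1, 0.9):
        avg = np.mean([run_bandit(eps, rng) for _ in range(200)])
        print(f"epsilon={eps}: average reward per pull {avg:.3f}")
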
>
> - Phil Goetz
>