Re: Computers that improve themselves

From: Samantha Atkins (samantha@objectent.com)
Date: Tue Apr 24 2001 - 02:48:42 MDT


"Eliezer S. Yudkowsky" wrote:
>

> If a system is opaque, then your lack of knowledge about the underlying
> process means that you know considerably less about whether the observed
> behaviors are probably general or probably specific to whatever was used
> as a test. To put it another way, a nonopaque system uses testing to
> verify your predictions about the behavior, predictions made using your
> understanding of the underlying causes. In an opaque,
> non-intelligently-generated system, "testing" is half testing and half
> discovery, so there's more hypothesizing and less verification.
>

Can't the SI redesign some of its components in ways that you don't
necessarily follow? There is a limit to how much complexity humans can
handle, and the SI would soon go beyond it. Even without a
self-improving SI at work, humans quite often design and implement
software systems that in practice they do not understand nearly as well
as they thought they did or would.

Complex systems tend to grow increasingly opaque even when not
self-improved or evolving. At some point, usually quite early, the why
of a system's behavior cannot be usefully deduced from modeling how its
components work.
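To make the point concrete, here is a toy sketch (Python; everything
in it is invented for illustration). Each component is a one-line rule
you understand completely, yet the coupled system's long-run behavior
resists prediction from the components:

    def step(x, y):
        # Two trivially simple, fully transparent "components".
        x2 = 3.9 * x * (1.0 - x)      # logistic update; chaotic at r = 3.9
        y2 = (y + 0.1 * x) % 1.0      # a shift coupled to x; also trivial
        return x2, y2

    a = (0.500000, 0.3)               # two nearly identical starting states
    b = (0.500001, 0.3)
    for _ in range(50):
        a = step(*a)
        b = step(*b)
    print(a)                          # after 50 steps the two trajectories
    print(b)                          # bear no useful resemblance

Knowing exactly how step() works tells you essentially nothing about
why the system ends up where it does.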

> If a system is nonmodular, you can only test system-level behaviors,
> rather than being able to test the components. Whatever component level
> is designed by GAs (don't you mean EP or EC?) will not be able to be
> broken down further. If you evolved a module, good luck checking
> preconditions and postconditions on individual functions.
>

If the code is self-modifying then before long you may not even know
what the individual functions are, much less what their pre- and
post-conditions have become.
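A minimal sketch of why (Python; the names and the "optimization" are
hypothetical):

    def transform(x):
        return x + 1              # the contract you tested: result == x + 1

    def self_optimize():
        # The system swaps in what it judges a better implementation;
        # nothing in the source text records that this happened.
        globals()['transform'] = lambda x: x * 2

    assert transform(3) == 4      # the post-condition holds... for now
    self_optimize()
    print(transform(3))           # prints 6; the tested contract is gone

Any pre/post-condition you verified against the original body is
silently void after the first rewrite.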

Even if it is modular and not dynamically self-modifying, the
interactions between the modules can be extremely complex and nearly
impossible to model well. It is one of those areas of CS where there is
some theoretical work but little of it has been cast into actual
working tools.
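Some rough arithmetic shows the scale of the problem (Python; the
numbers are arbitrary but conservative):

    # With n modules, each observable in k states, the joint state
    # space is k**n -- system-level tests can only ever sample it.
    n_modules, k_states = 20, 4
    joint = k_states ** n_modules
    print(joint)                      # 1099511627776, about 1.1e12
    print(joint / 1e6 / 86400)        # ~12.7 days at a million tests/sec

And twenty modules is a small system; the joint space grows
exponentially with every module added.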

At what level do you model your modules, along what dimension[s] are
they decomposed, and at what level are pre/post-conditions expressed?
Most systems are decomposed along a single dimension, and other
dimensions such as internal representations and data structures are
hidden behind the exposed dimension's API. This is bad if your
particular application for that component requires a different internal
representation in order to be efficient, or even to work at all without
a bunch of really ugly hackery. Rewrites of internal implementation
details and/or exposing orthogonal APIs for controlling the internals
are among the ways modules get optimized in running systems. But the
entry-exit conditions tend to be bound up to some degree in the
representation (see the sketch below), so there is a second-level
problem in suitably modifying those. Tests based on pre- and
post-conditions would also have to be evolved for the newly optimized
version. And all of this may change from invocation to invocation of
the same module depending on the needs of the invoking context. I
believe a system could be built to change in such a fashion and to
self-test, but it would be a far stretch for me to believe that humans
are going to know how to test the result in fine-grained,
function-level detail.
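Here is what I mean by a contract bound up in the representation
(Python sketch; both classes are hypothetical):

    class SortedSet:
        def __init__(self):
            self.items = []           # representation: a sorted list
        def add(self, x):
            if x not in self.items:
                self.items.append(x)
                self.items.sort()
            # the post-condition as written is about the representation:
            assert self.items == sorted(self.items)

    class HashSet:                    # the "optimized" rewrite
        def __init__(self):
            self.items = set()        # a different representation entirely
        def add(self, x):
            self.items.add(x)
            # there is no sortedness to assert; the old post-condition
            # test does not transfer, it has to be re-derived

The behavioral contract (membership) survives the rewrite, but the
representation-bound test simply does not apply to the new version.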

 
> All this applies only to young AIs, of course, but some of your confidence
> in the oak tree may depend on your confidence in the acorn.

Well, we have confidence in the acorn because we have seen acorns that
look roughly like this one produce oak trees roughly like those over
there often enough. I don't see how this applies to a brand-new kind of
seed for an intelligent, self-modifying (beyond what current biological
systems can do) system such as has never before existed.

- samantha


