Re: Computers that improve themselves

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Apr 24 2001 - 01:26:36 MDT


Samantha Atkins wrote:
>
> In the end you can only test as much as you can in lieu of understanding
> exactly how it all works. This is true for Friendly AI / Sysop plans
> also. The AI must self-improve beyond what any human or set of humans
> can design or understand if it is to meet its objectives. If we can
> test a budding SI sufficiently to be reasonably confident of its final
> dependability and benevolence then we should be able to do the same with
> an SI that has had some components created by GAs. Or am I missing
> some critical difference?

If a system is opaque, then your lack of knowledge about the underlying
process means you know considerably less about whether the observed
behaviors are general or specific to the particular cases you happened to
test. To put it another way, with a nonopaque system you use testing to
verify predictions about the behavior, predictions made from your
understanding of the underlying causes. In an opaque,
non-intelligently-generated system, "testing" is half testing and half
discovery, so there's more hypothesizing and less verification.

If a system is nonmodular, you can only test system-level behaviors
rather than testing the components individually. Whatever component level
is designed by GAs (don't you mean EP or EC?) cannot be broken down any
further. If you evolved a module, good luck checking preconditions and
postconditions on its individual functions.
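
To make the contrast concrete, here is a minimal Python sketch (the
function names and checks are hypothetical illustrations, not anything
from the original post): a hand-designed component can carry explicit
precondition and postcondition assertions, whereas an evolved component
can only be exercised end-to-end on whatever sample inputs we try.

    # Hypothetical illustration: contract checks on a designed component
    # versus black-box testing of an evolved one.

    def sort_designed(xs):
        """Hand-designed component: we can state and check its contract."""
        assert all(isinstance(x, (int, float)) for x in xs)    # precondition
        result = sorted(xs)
        assert len(result) == len(xs)                           # postconditions
        assert all(a <= b for a, b in zip(result, result[1:]))
        return result

    def sort_evolved(xs):
        """Stand-in for an evolved module: internally opaque, so the only
        available check is whether outputs look right on the cases we try."""
        # ... an evolved weight table or program tree would live here ...
        return sorted(xs)  # placeholder behavior

    # For the designed component, each test verifies a prediction drawn
    # from the stated contract.  For the evolved one, testing doubles as
    # discovery: a new input may reveal behavior nobody predicted.
    for sample in ([3, 1, 2], [], [5, 5, 0]):
        assert sort_designed(sample) == sort_evolved(sample)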

All this applies only to young AIs, of course, but some of your confidence
in the oak tree may depend on your confidence in the acorn.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
