Re: Definition of strong recursive self-improvement

From: Russell Wallace (russell.wallace@gmail.com)
Date: Fri Dec 31 2004 - 22:57:14 MST


On Fri, 31 Dec 2004 23:14:11 -0500, Eliezer Yudkowsky
<sentience@pobox.com> wrote:
> You mean that when you write code, you have no way of knowing what any
> individual module does, except letting it run for an open-ended period of
> time? That must make it awfully difficult to write code. How many
> randomly generated strings do you need to test before you find one that
> compiles?

The distinction is between writing code to a specification, and
deciding what the specification should be.

If I know what output a program needs to produce for a given input,
and write code to do this... well, in practice, I can never be quite
sure my code is correct, and it needs to be tested before I can be
even somewhat sure of that. However, if I know the desired output
well enough that I could write it into a formal spec, I _could_ in
principle write code together with a formal proof of correctness (not
that there's ever time to do that in practice, but in principle an AI
might be able to do this faster than I can).
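
To make the distinction concrete, here is a toy sketch in a theorem
prover (Lean 4 syntax; the names Spec and double are invented purely
for illustration, not taken from any real system) of code carrying a
machine-checked proof that it meets a formal specification:

    -- A toy formal specification: the function must double its input.
    def Spec (f : Nat → Nat) : Prop := ∀ n, f n = 2 * n

    -- An implementation written against that specification.
    def double (n : Nat) : Nat := n + n

    -- A machine-checked proof that the implementation meets the spec.
    theorem double_correct : Spec double := by
      intro n          -- goal: double n = 2 * n
      unfold double    -- goal: n + n = 2 * n
      omega            -- linear arithmetic closes it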

However, a formal proof is relative to a formal specification.
Improving the intelligence of an AI (as opposed to merely its
computational efficiency) involves changing the output - so even if
there were a formal specification of the old version, the
specification itself would have to change, and a proof of correctness
against the old specification tells you nothing about the new one.
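
In the same toy terms as above (again, invented names): change the
specification and the old proof simply ceases to apply, because it
was a proof about Spec, not about the revised Spec':

    -- A revised specification demanding different output.
    def Spec' (f : Nat → Nat) : Prop := ∀ n, f n = 2 * n + 1

    -- double_correct proved Spec double; it says nothing about
    -- Spec' double. Here no proof could exist at all: the old
    -- code fails the new spec at every input.
    example : ¬ Spec' double := by
      intro h
      have h0 := h 0       -- old code gives 0 here; new spec demands 1
      unfold double at h0  -- h0 : 0 + 0 = 2 * 0 + 1
      omega                -- contradiction

Proving new code against Spec' is a fresh proof burden - and deciding
that Spec' is the right specification is exactly the part that isn't
a proof problem at all.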

So how do you propose to solve this problem?

(Disclaimer of personal bias: I'd like full recursive self-improvement
to work, so I'm hoping for a reply that'll make me go "oh, yes, I
hadn't thought of that; now I see how that could work", though I
unfortunately think that's unlikely.)

- Russell


