From: Samantha Atkins (email@example.com)
Date: Sat Jan 01 2005 - 14:09:53 MST
I am not sure I see the difficulty. If one has (a) a way of measuring
the correctness of results, i.e. their degree of fit to a goal given
the problem context, (b) a way of measuring how efficiently a
nominally correct solution was arrived at, and (c) a means of tweaking
the mechanisms employed to reach the solution, then even something
like a GA is in principle capable of generating progressive
improvements to the system.
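As a minimal sketch of that loop (all names and parameters here are
illustrative, not anything from the thread): given nothing but a
fitness measure, a toy GA can generate progressively better solutions.

```python
import random

def evolve(fitness, length=20, pop_size=50, generations=100):
    """Toy genetic algorithm: given only a fitness measure,
    it produces progressively better bit-string solutions."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(length)
            child = a[:cut] + b[cut:]            # crossover
            i = random.randrange(length)
            child[i] ^= 1                        # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# The goal (the fitness function) stays fixed throughout;
# only the candidate solutions improve.
best = evolve(fitness=sum)   # "one-max": maximize the number of 1s
```

The point of the sketch is only that no understanding of the solution
is required: selection against the fixed measure is enough to drive
improvement.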
I don't see that recursive self-improvement requires that the
[super]goal itself change. So what is the problem?
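One way to illustrate mechanism-level improvement under a fixed
supergoal is a self-adaptive GA, in which each individual carries its
own mutation rate and that rate is itself subject to selection. The
fitness function never changes; only the search machinery does. (This
is my illustrative sketch, not something proposed in the thread.)

```python
import random

def self_adaptive_step(pop, fitness):
    """One generation of a toy self-adaptive GA. Each individual is
    (genes, mutation_rate). The fitness function -- the fixed
    supergoal -- is never modified, but the search mechanism
    (the mutation rate) evolves along with the solutions."""
    pop.sort(key=lambda ind: fitness(ind[0]), reverse=True)
    survivors = pop[:len(pop) // 2]              # selection on the fixed goal
    children = []
    for genes, rate in survivors:
        # The mechanism parameter itself mutates and is inherited.
        new_rate = max(0.01, min(0.5, rate * random.uniform(0.8, 1.25)))
        child = [g ^ (random.random() < new_rate) for g in genes]
        children.append((child, new_rate))
    return survivors + children

random.seed(0)
pop = [([random.randint(0, 1) for _ in range(20)], 0.3)
       for _ in range(40)]
for _ in range(60):
    pop = self_adaptive_step(pop, fitness=sum)   # one-max, as before
best_genes, best_rate = max(pop, key=lambda ind: sum(ind[0]))
```

Here the system tweaks its own improvement mechanism while the goal it
is measured against stays constant, which is all that recursive
self-improvement seems to require.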
On Sat, 1 Jan 2005 05:57:14 +0000, Russell Wallace wrote:
> On Fri, 31 Dec 2004 23:14:11 -0500, Eliezer Yudkowsky
> <firstname.lastname@example.org> wrote:
> > You mean that when you write code, you have no way of knowing what any
> > individual module does, except letting it run for an open-ended period of
> > time? That must make it awfully difficult to write code. How many
> > randomly generated strings do you need to test before you find one that
> > compiles?
> The distinction is between writing code to a specification, and
> deciding what the specification should be.
> If I know what output a program needs to produce for a given input,
> and write code to do this... well, in practice, I can never be quite
> sure my code is correct, and it needs to be tested before I can be
> even somewhat sure of that. However, if I know the desired output well
> could write it into a formal spec, I _could_ in principle write code
> together with a formal proof of correctness (not that there's ever
> time to do that in practice, but in principle an AI might be able to
> do this faster than I can).
> However, formal proof is relative to a formal specification. Improving
> the intelligence of an AI (as opposed to merely its computational
> efficiency) involves changing the output - so if there was a formal
> specification of the old version, the specification will have to be
> changed; therefore, formal proof no longer applies.
> So how do you propose to solve this problem?
> (Disclaimer of personal bias: I'd like full recursive self-improvement
> to work, so I'm hoping for a reply that'll make me go "oh, yes, I
> hadn't thought of that; now I see how that could work", though I
> unfortunately think that's unlikely.)
> - Russell
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT