Re: Definition of strong recursive self-improvement

From: Randall Randall (randall@randallsquared.com)
Date: Sat Jan 01 2005 - 00:41:55 MST


On Dec 31, 2004, at 11:14 PM, Eliezer Yudkowsky wrote:
> Russell Wallace wrote:
>> Yes; and both of these would in the long run encounter the same
>> fundamental problem that a Strong RSI AI must encounter, as far as I
>> can see. Again, the problem is, how do you know whether a putative
>> improvement is in fact an improvement? There is no way of finding out
>> other than by letting it run loose for an open-ended period of time,
>> in which case you're back into evolution by natural selection. Do you
>> think you have found a way to solve this problem? If so, how?
>
> You mean that when you write code, you have no way of knowing what any
> individual module does, except letting it run for an open-ended period
> of time? That must make it awfully difficult to write code.

I think I understand the question being asked here,
and I think it's an important one, so let me try to
ask it, or a related question, in a different way:

When you write code, you simulate, on some level, what
the code is doing, in order to determine whether the
algorithm you're writing will do what you intend.
However, no amount of simulation will let you design an
algorithm that is more intelligent than you are, since
the simulation must be executable within your current
algorithm. At most, you can design an algorithm that
reaches conclusions you can already reach (by
simulation, if necessary) more quickly. After several
iterations you may be able to design algorithms that
would have required more time than is available to
simulate, so this does seem to fit the definition of
strong RSI; but the method will never give you an
algorithm that is strongly more intelligent than you
are, since you have to be able to run each candidate in
simulation before implementing it on the same level as
your current algorithm. Further, this requires that
there always be some improvement that can be simulated
on your current virtual machine, yet which will allow
simulations of things that cannot feasibly be simulated
on that VM.
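
(To make the loop I'm describing concrete, here's a toy
sketch in Python; the names and the test-against-known-answers
check are just placeholders for whatever verification the
improver actually uses, not anyone's actual design:)

  def current_solver(problem):
      # stand-in for the improver's present algorithm
      return sum(problem)

  def simulate_ok(candidate, test_problems):
      # run the candidate inside the current VM and accept it only
      # if it matches answers the current algorithm can already reach
      return all(candidate(p) == current_solver(p) for p in test_problems)

  def improve(make_candidate, test_problems):
      candidate = make_candidate()
      if simulate_ok(candidate, test_problems):
          return candidate        # adopt the verified improvement
      return current_solver       # otherwise keep what we have

The acceptance test is "agrees with answers I could already
reach", which is exactly why what gets adopted can only be
faster, never strongly smarter.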

Is there some reason to think that such "easy"
improvements will always be available? I understand
that's a very open-ended question.

Here's a worse issue: since there is, by definition, more
memory and speed available on the hardware you are running
on than in the VM, then if there are always easy improvements
simulable on your VM, there are likely to be further
improvements available to an evolutionary system that doesn't
care about keeping control. That is, the curve of improvement
available at a lower level of simulation, or to an evolution
run directly on the hardware, lies above the curve available
to VM-only simulation, so non-Friendly AI will have the
opportunity to outrace Friendly AI.

> How many randomly generated strings do you need to test before you
> find one that compiles?

Well, you'd clearly want to use a language in which you
could do symbolic computation, so that *all* attempts
compile. That's been a solved problem for decades.
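
(For instance, a toy generator of random expression trees in
the genetic-programming style -- the function and terminal
sets here are made up purely for illustration. Every tree it
produces is a well-formed program, so syntax is never the
filter; fitness is:)

  import random, operator

  FUNCTIONS = [(operator.add, 2), (operator.mul, 2), (operator.neg, 1)]
  TERMINALS = ['x', 1, 2]

  def random_tree(depth=3):
      # every tree built this way is a valid expression
      if depth == 0 or random.random() < 0.3:
          return random.choice(TERMINALS)
      fn, arity = random.choice(FUNCTIONS)
      return (fn, [random_tree(depth - 1) for _ in range(arity)])

  def evaluate(tree, x):
      # "compiling" is trivial: any generated tree evaluates
      if tree == 'x':
          return x
      if isinstance(tree, (int, float)):
          return tree
      fn, args = tree
      return fn(*(evaluate(a, x) for a in args))

  print(evaluate(random_tree(), x=5))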

--
Randall Randall <randall@randallsquared.com>
"If you do not work on an important problem,
it's unlikely you'll do important work." -- Richard Hamming

