Re: Definition of strong recursive self-improvement

From: Russell Wallace
Date: Fri Dec 31 2004 - 20:16:41 MST

On Fri, 31 Dec 2004 14:26:44 -0500, Eliezer Yudkowsky wrote:
> I know of no way for a recursive optimizer to change the laws of physics.

Me neither.

> (That doesn't mean no way exists; but it's not necessary.)

> Strong RSI
> means that the part of the process that, structurally speaking, performs
> optimization, is open to restructuring.

Yes, that's what I understood you to mean by it.

> Human beings using their intelligence to directly and successfully modify
> human brains for higher intelligence, would be a very weak example of
> Strong RSI - *if* the resultant enhanced humans were smarter than the
> smartest researchers previously working on human intelligence enhancement.
> Human beings using their intelligence to collect favorable mutations into
> the human genome, or programming DNA directly, for the purpose of producing
> smarter children, would be a weak case of RSI arising (after a hell of a
> long time) from natural selection.

Yes; and both of these would, in the long run, encounter the same
fundamental problem that a Strong RSI AI must encounter, as far as I
can see. Again, the problem is: how do you know whether a putative
improvement is in fact an improvement? There is no way of finding out
other than by letting it run loose for an open-ended period of time,
in which case you're back to evolution by natural selection. Do you
think you have found a way to solve this problem? If so, how?
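The obvious counter to this objection is to score each candidate modification on a fixed benchmark suite and accept it only if it outperforms the current version. The sketch below (all names and the toy "optimizers" are hypothetical, not from the original discussion) shows that acceptance loop, and also why it does not settle the question: it certifies improvement only on the tasks one thought to test, not over open-ended future use.

```python
def evaluate(optimizer, benchmarks):
    """Average score of an optimizer over a fixed benchmark suite."""
    return sum(optimizer(task) for task in benchmarks) / len(benchmarks)

def accept_improvement(current, candidate, benchmarks):
    """Accept the candidate only if it beats the current optimizer on the
    suite -- which says nothing about tasks outside the suite, or about
    behavior over an open-ended period of time."""
    return evaluate(candidate, benchmarks) > evaluate(current, benchmarks)

# Toy usage: each "optimizer" is scored by how close its guess is to a target.
benchmarks = [2.0, 3.0, 5.0]
current = lambda target: -abs(target - 2.5)    # always guesses 2.5
candidate = lambda target: -abs(target - 3.0)  # always guesses 3.0

print(accept_improvement(current, candidate, benchmarks))  # True
```

The candidate passes because it happens to do better on these three targets; a fourth, untested target could reverse the verdict, which is the open-endedness problem in miniature.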

- Russell

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT