Re: Definition of strong recursive self-improvement

From: maru (marudubshinki@gmail.com)
Date: Sun Jan 02 2005 - 06:53:41 MST


So, in other words, you merely want a recursive optimizer that can code
'well enough' until it has leveraged weak superintelligence enough to
solve that problem somehow?
~Maru

Eliezer S. Yudkowsky wrote:

> ....
> I only partially understand - I am presently working on understanding
> - how humans write code without either simulating every step of every
> possible run of the program or employing contemporary slow
> theorem-proving techniques. Nonetheless it is evident that we write
> code. Your proof against recursive self-improvement, which denies
> even the first step of writing a single line of functioning code, is
> equally strong against the existence of human programmers.
>
> I intend to comprehend how it is theoretically possible that humans
> should write code, and then come up with a deterministic or
> calibrated-very-high-probability way of doing "the same thing" or
> better. It is not logically necessary that this be possible, but I
> expect it to be possible.
>
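A minimal sketch (my illustration, not anything from the thread) of the middle ground the quoted passage gestures at: a programmer can gain calibrated-high-probability confidence in a piece of code neither by simulating every possible run nor by formal theorem-proving, but by checking a claimed invariant against a random sample of inputs. The function and invariant here are invented for the example.

```python
import random

def sorted_merge(a, b):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]

def spot_check(trials=1000):
    """Check the merge invariant on a random sample of inputs.

    Exhaustive simulation over every possible input is infeasible, and a
    formal proof is slow; a large random sample yields high (though not
    certain) confidence that the invariant holds.
    """
    for _ in range(trials):
        a = sorted(random.randint(0, 99) for _ in range(random.randint(0, 20)))
        b = sorted(random.randint(0, 99) for _ in range(random.randint(0, 20)))
        assert sorted_merge(a, b) == sorted(a + b)
    return True
```

This is the intuition behind property-based testing: confidence grows with the number of sampled runs, without ever enumerating the full input space.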



This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:50 MST