Re: Definition of strong recursive self-improvement

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jan 01 2005 - 21:48:31 MST


Randall, Russell:

It seems to me that you have just proved that Marcus Hutter's AIXI can
be no smarter than a human, when AIXI could tear apart a human like
tinfoil. We can specify computations which neither a human mind nor any
physically realizable computer can run, yet which, if they were
computed, would rule the universe.
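
For concreteness, here is AIXI's decision rule as I recall Hutter
stating it - a sketch from memory, not something quoted from either of
your posts:

    % AIXI at cycle k, horizon m: an expectimax over all environment
    % programs q consistent with the interaction history so far,
    % weighted by the Solomonoff prior 2^{-l(q)}.
    \[
      a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
             \bigl[ r_k + \cdots + r_m \bigr]
             \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
    \]

The inner sum ranges over every program q for a universal monotone
Turing machine U whose outputs match the observed history; evaluating
it requires, among other things, knowing which programs halt. The
specification is perfectly definite, and perfectly impossible to run
on any physical computer.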

I only partially understand - I am presently working on understanding -
how humans write code without either simulating every step of every
possible run of the program or employing contemporary slow
theorem-proving techniques. Nonetheless it is evident that we write
code. Your proof against recursive self-improvement, which denies even
the first step of writing a single line of functioning code, is equally
strong against the existence of human programmers.
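
To make that concrete with a trivial example of my own (not anything
either of you wrote): a programmer who writes down Euclid's algorithm
is confident it works for every pair of nonnegative integers - an
unbounded input space nobody simulates run by run - on the strength of
an invariant held informally in the head, not a machine-checked proof.

    # Minimal sketch in standard Python, no outside libraries assumed.
    # Nobody "simulates every possible run" of this; confidence rests
    # on the informal invariant that gcd(a, b) == gcd(b, a % b) holds
    # on every pass through the loop.

    def gcd(a: int, b: int) -> int:
        """Greatest common divisor of two nonnegative integers."""
        while b:
            a, b = b, a % b   # invariant preserved each iteration
        return a

    if __name__ == "__main__":
        # Spot checks cover a vanishing fraction of the possible runs.
        assert gcd(48, 36) == 12
        assert gcd(0, 7) == 7
        print(gcd(2**61 - 1, 3**40))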

I intend to comprehend how it is theoretically possible that humans
should write code, and then come up with a deterministic or
calibrated-very-high-probability way of doing "the same thing" or
better. It is not logically necessary that this be possible, but I
expect it to be possible.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

