Definition of strong recursive self-improvement

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Fri Dec 31 2004 - 12:26:44 MST


Russell Wallace wrote:
> On Thu, 30 Dec 2004 19:15:36 -0800 (PST), Thomas Buckner
> <tcbevolver@yahoo.com> wrote:
>
>>Evolutionary trial-and-error tournaments between
>>subsystems of itself, with winning strategies
>>globally adopted, but still under control of the
>>singleton.
>
> Well yes, that's what I believe in too. Evolution as a tool under
> control of something that is not itself evolved.
>
> Or put another way: A "self-modifying" entity must, to produce useful
> results in the long run, consist of a static part that modifies the
> dynamic part, but is not itself modified in the process.
>
> (Biological evolution is not a counterexample (the static part being
> the laws of physics + the terrestrial environment) nor is human
> culture (the static part being the laws of physics + the terrestrial
> environment + the human genome).)
>
> However, that won't "fold the graph in on itself" to make a magic FOOM
> as Eliezer appears to believe.
>
> As I understand him to mean it, "recursive self-improvement" means
> modifying the whole stack. That's the part I don't believe in; more to
> the point, that's the part that would have to work in order for a
> "hard takeoff" scenario to be realistic.
>
> I don't think that can happen, but if it could, it would make a
> difference to rational planning at the present time, which is why I'm
> asking whether there's a reason to believe it could.

I know of no way for a recursive optimizer to change the laws of physics.
(That doesn't mean no way exists; but it's not necessary.) Strong RSI
means that the part of the process that, structurally speaking, performs
the optimization is open to restructuring.

When you look at a system at the level of "the laws of physics", then the
laws of physics, as such, are not structurally responsible for optimizing
either genomes (in the case of natural selection) or thoughts (in the case
of humans). In both cases, we can model the optimization on a higher level
without directly modeling the laws of physics. Now this doesn't mean the
optimization takes place outside the laws of physics. What it does mean is
that the optimizer can be fully recursive within the laws of physics,
because we don't have to modify the laws of physics to modify the structure
embedded within physics that performs optimization.
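
If it helps to make that layering concrete, here is a toy sketch in
Python (every name in it, and the integer "world", is invented purely
for illustration).  The laws of physics are a fixed transition rule,
and the optimizer is a structure encoded inside the state that rule
acts on; swapping in a different optimizer never touches the rule.

    def physics(state: dict) -> dict:
        """Fixed transition rule, never modified: it mechanically runs
        whatever optimizing structure is encoded in the state."""
        optimizer = state["optimizer"]  # the optimizer lives *inside* the state
        new_state = dict(state)
        new_state["world"] = optimizer(state["world"])
        return new_state

    def push_up(world: int) -> int:
        """One structure physics might host: steer the world value upward."""
        return world + 1

    # Restructuring the optimizer means editing the state, not the rule:
    s = {"world": 0, "optimizer": push_up}
    s = physics(s)                    # world -> 1
    s["optimizer"] = lambda w: w - 1  # a different optimizer, same physics
    s = physics(s)                    # world -> 0
    print(s["world"])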

Let's say we start with a coin that might be either heads or tails. A
human being looks at this coin. If the coin is heads, the human being does
nothing. If the coin is tails, the human being performs a FLIP operation.
After this is carried out, the coin becomes heads regardless of its
initial condition. This is an optimization process (albeit a very weak and
uninteresting one), compressing the future so that the coin ends up heads
regardless of its initial condition. If an AI were unleashed upon the
problem with a utility function of tails=0 heads=1, then arbitrarily great
efforts might go into finding and flipping the coin, up to the limit of the
AI's intelligence. Actions would be chosen on the basis of whether they
were predicted to lead to a coin in state HEADS. If the AI were smart
enough, it might assemble a ship and send it to a distant galaxy to find
the coin, because that was the action predicted to lead to state HEADS with
the highest probability.
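
In code, that action selection might look like the following toy
sketch (the action set and all the probability numbers are invented;
only the utility function tails=0, heads=1 comes from the description
above):

    UTILITY = {"heads": 1, "tails": 0}

    # Invented predictions: P(coin ends up HEADS | action).
    PREDICTED_P_HEADS = {
        "do_nothing":          0.50,
        "flip_nearby_coin":    0.90,
        "send_ship_to_galaxy": 0.99,
    }

    def expected_utility(action: str) -> float:
        p = PREDICTED_P_HEADS[action]
        return p * UTILITY["heads"] + (1 - p) * UTILITY["tails"]

    # Actions are chosen purely on predicted probability of state HEADS,
    # so the future is compressed toward HEADS whatever the initial state.
    best_action = max(PREDICTED_P_HEADS, key=expected_utility)
    print(best_action)  # send_ship_to_galaxy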

The important thing to note is that it is inconvenient to regard "the laws
of physics" as flipping the coin, because if you abstract the laws of
physics from the particular initial configuration of the universe, and
examine the laws as such, then the laws do not say that the coin must go to
HEADS. With alternate initial conditions, the universe might contain an
optimizer that steers the future to TAILS. So the laws of physics,
although unmodifiable as far as we know, are not structurally responsible
for the optimization. That's why an optimizer can reach around and modify
itself, reasoning that if the optimizer has a different form it will be
more efficient at choosing actions that lead to HEADS, and that therefore
the self-modification itself probably leads to HEADS.
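
A toy sketch of that last step (the plan list, the numbers, and both
search procedures are invented for illustration): self-modification is
scored by exactly the same criterion as any object-level action.

    def current_optimizer(plans):
        """Weak search: only ever examines the first two candidate plans."""
        return max(plans[:2], key=lambda p: p["p_heads"])

    def proposed_optimizer(plans):
        """Restructured search: examines every candidate plan."""
        return max(plans, key=lambda p: p["p_heads"])

    plans = [
        {"name": "do_nothing", "p_heads": 0.50},
        {"name": "flip_coin",  "p_heads": 0.90},
        {"name": "build_ship", "p_heads": 0.99},  # only the stronger search finds this
    ]

    # If the restructured form is predicted to find higher-P(HEADS) plans,
    # the act of self-modification is itself a HEADS-promoting action.
    if proposed_optimizer(plans)["p_heads"] > current_optimizer(plans)["p_heads"]:
        current_optimizer = proposed_optimizer  # the optimizer rewrites itself
    print(current_optimizer(plans)["name"])     # build_ship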

Human beings using their intelligence to directly and successfully modify
human brains for higher intelligence would be a very weak example of
Strong RSI - *if* the resultant enhanced humans were smarter than the
smartest researchers previously working on human intelligence enhancement.

Human beings using their intelligence to collect favorable mutations into
the human genome, or to program DNA directly, for the purpose of producing
smarter children, would be a weak case of RSI arising (after a hell of a
long time) from natural selection. That is, natural selection is an
optimization process that produces optimizers very unlike itself (vehicles
for genes with their own nervous systems and built-in goals), but these
secondary optimizers don't change the structure of the primary optimization
process; they don't use their intelligence to modify DNA. Eventually one
of these optimizers became capable of reaching back and transcending the
optimization process of natural selection, choosing genes on a criterion
other than reproductive efficiency of the vehicles constructed by the
genes. But it's not Strong RSI until the first genetically modified
supergenius is born who is better at genetic engineering than the previous
researchers. And by that time, some existing supergenius like myself will
have long since built a Strong RSI that doesn't pass through the bottleneck
of 200Hz neurons.

To sum up, if an optimizer is capable of restructuring the part of itself
that does the optimizing - capable of changing the dynamics that
distinguish, evaluate, and choose between possible plans and designs; and
if this restructuring is broad enough to permit moving between, e.g.,
optimizations structured like natural selection and optimizations
structured like a mammalian brain; and if the restructuring produces
substantial increases in the power of the optimization process, including
the power to commit further restructurings; then I would call that Strong
Recursive Self-Improvement.
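
As a deliberately crude caricature of why the third condition matters
(the numbers are arbitrary and "power" is a stand-in for optimization
strength, not any real measure): when a restructuring also strengthens
the part that does the restructuring, the gains compound, which is
something a static part improving a dynamic part can never do.

    def static_improver(power: float, steps: int) -> float:
        """A static part improves the dynamic part by a fixed increment;
        the improver itself never gets any better."""
        for _ in range(steps):
            power += 0.1
        return power

    def strong_rsi(power: float, steps: int) -> float:
        """Each restructuring also strengthens the part that does the
        restructuring, so the size of the next gain grows too."""
        for _ in range(steps):
            power += 0.1 * power
        return power

    print(static_improver(1.0, 20))  # 3.0  (linear gains)
    print(strong_rsi(1.0, 20))       # ~6.7 (compounding gains)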

Please note that this definition excludes natural selection, ordinary human
intelligence, genetic engineering that does not produce new researchers who
are better at genetic engineering, memetic 'evolution' that can't modify
human brains to produce enhanced humans who are better at modifying human
brains, the progress of the global economy, and many other weak little
optimization processes commonly offered up as precedent for the literally
unimaginable power of a superintelligence.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

