Re: Definition of strong recursive self-improvement

From: Thomas Buckner
Date: Fri Dec 31 2004 - 18:21:39 MST

--- Eliezer Yudkowsky wrote:

> Russell Wallace wrote:
> > On Thu, 30 Dec 2004 19:15:36 -0800 (PST), Thomas Buckner wrote:
> >
> >> Evolutionary trial-and-error tournaments between subsystems of itself,
> >> with winning strategies globally adopted, but still under control of
> >> the singleton.
> >
> > Russell: Well yes, that's what I believe in: evolution as a tool under
> > control of something that is not itself evolved.
> >
> > Or put another way: a "self-modifying" entity must, to produce useful
> > results in the long run, consist of a static part that modifies the
> > dynamic part but is not itself modified in the process.
> >
> > (Biological evolution is not a counterexample (the static part being
> > the laws of physics + the terrestrial environment), nor is human
> > culture (the static part being the laws of physics + the terrestrial
> > environment + the human genome).)
> >
> > However, that won't "fold the graph in on itself" to make a magic FOOM
> > as Eliezer appears to believe.
(Eliezer replies:)
> To sum up, if an optimizer is capable of restructuring the part of itself
> that does the optimizing - capable of changing the dynamics that
> distinguish, evaluate, and choose between possible plans and designs; and
> if this restructuring is broad enough to permit moving between, e.g.,
> optimizations structured like natural selection and optimizations
> structured like a mammalian brain; and if the restructuring produces
> substantial increases in the power of the optimization process, including
> the power to commit further restructurings; then I would call that Strong
> Recursive Self-Improvement.
>
> Please note that this definition excludes natural selection, ordinary
> human intelligence, genetic engineering that does not produce new
> researchers who are better at genetic engineering, memetic 'evolution'
> that can't modify human brains to produce enhanced humans who are better
> at modifying human brains, the progress of the global economy, and many
> other weak little optimization processes commonly offered up as precedent
> for the literally unimaginable power of a superintelligence.
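As a side note, Russell's static/dynamic split and my tournament proposal can
be sketched in code. This is only a hypothetical toy illustration (nothing
from the thread itself): a fixed selection loop that is never rewritten, acting
on mutable strategies, with tournament winners adopted globally.

```python
import random

def tournament_optimize(population, fitness, mutate, rounds, seed=0):
    """STATIC part: this loop is never modified by the process it runs.

    The population of strategies is the DYNAMIC part - it is what gets
    modified, via trial-and-error tournaments between its members.
    """
    rng = random.Random(seed)
    pop = list(population)  # dynamic part: subject to modification
    for _ in range(rounds):
        a, b = rng.sample(range(len(pop)), 2)
        challenger = mutate(pop[a], rng)
        # Tournament: a winning variant replaces the loser it was matched
        # against, so successful strategies spread through the population.
        if fitness(challenger) > fitness(pop[b]):
            pop[b] = challenger
    return max(pop, key=fitness)

# Toy usage: evolve numbers toward the target value 100.
best = tournament_optimize(
    population=[0.0, 10.0, 50.0],
    fitness=lambda x: -abs(100.0 - x),
    mutate=lambda x, rng: x + rng.uniform(-5, 5),
    rounds=2000,
)
print(best)  # converges near 100
```

Note that nothing here touches `tournament_optimize` itself - which is exactly
Russell's point, and exactly what Eliezer's Strong RSI definition requires an
optimizer to go beyond.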

I do not assert that the optimizer should not optimize its own
optimization; it will. But the more it does so, the less time we get to
affect the outcome. If it aggressively recurses its optimization, the FOOM
may not exactly be magic, but it will sure look that way to us. That's
precisely why I reference the irrelevant banker in the Sturgeon story,
re: investment potential. Very few MBAs understand the obviating potential
of SAI, and if I were an investor looking to put major capital into such
research, I'd drop the lion's share on SIAI, consider it a write-off, and
spend the rest pursuing pleasure activities, on the chance that I might
not have much time left to enjoy myself before Whatever.

Tom Buckner


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT