Re: Definition of strong recursive self-improvement

From: Russell Wallace
Date: Sun Jan 02 2005 - 16:23:25 MST

On Sun, 02 Jan 2005 15:28:41 -0600, Eliezer S. Yudkowsky wrote:
> There are specific things about how humans write code that I do not
> presently understand, even as to matters of fundamental principle. If I
> had never seen humans write code, I wouldn't know to expect that they
> could.

Just to check, do you mean:

a) That if you had seen humans plan journeys, construction projects,
military campaigns etc, you wouldn't know to expect that they could
also write code on the grounds that a program and a plan are the same
sort of thing wearing different hats,

b) That you wouldn't know to expect that humans would be capable of
any sort of effective planning in a world where perfect planning is
impossible?

(If you mean a), I disagree with you; if you mean b), I agree.)

> I'm sorry if this seems harsh, but, you have read the page, you know the
> rules. "Semi-formal reasoning" is not an answer. You have to say what
> specifically are the dynamics of semi-formal reasoning, why it works to
> describe the universe reliably enough to permit (at least) the observed
> level of human competence in writing code... the phrase "semi-formal"
> reasoning doesn't tell me what kind of code humans write, or even what
> kind of code humans do not write. I'm not trying to annoy you, this is
> a generally strict standard that I try to apply.

No problem, I'm just trying to point out that if either of us were in
a position to answer that question, this conversation wouldn't be
necessary in the first place. I'm in the position of a Renaissance
alchemist telling a colleague "I think life works because it's made of
zillions of clockwork-like components that are themselves made of
atoms, rather than because of vital force" and getting the reply "That
still doesn't count as a Technical Explanation" - in a sense the reply
is correct, but the explanation is still the best available at the
current time.

> How humans write code
> is not something that you have answered me, nor have you explained why
> the phrase "semi-formal" excepts your previous impossibility argument.
> Should not semi-formal reasoning be even less effective than formal
> reasoning? Unless it contains some additional component, not present in
> formal reasoning, that works well and reliably - perhaps not perfectly,
> but still delivering reliably better performance than random numbers,
> while not being the same as a formal proof.

The additional component is the ability to ignore the indefinitely
large set of things that could in principle matter but in practice
probably don't in the particular context, in favor of the smallish set
of things that are likely to matter; that is what makes a problem
tractable, at the price of forgoing certainty. (And no, I don't at
this time have a technical explanation of _how_ this additional
component works.)
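The trade-off being described - examining only the factors judged
likely to matter, rather than every combination of factors - can be
sketched in a toy Python example. The relevance scores and factor
names here are entirely hypothetical stand-ins for whatever heuristic
a reasoner actually uses; the sketch shows only the shape of the
trade-off, not a mechanism anyone has specified.

```python
def semi_formal_select(factors, relevance, k):
    """Keep only the k factors judged most likely to matter.

    Exhaustive reasoning would have to examine all 2**len(factors)
    subsets of factors; this examines one small set instead,
    trading away certainty for tractability.
    """
    ranked = sorted(factors, key=lambda f: relevance[f], reverse=True)
    return ranked[:k]

# Hypothetical factors a programmer might, in principle, consider.
factors = ["requirements", "algorithm", "input format",
           "compiler bugs", "cosmic rays", "phase of the moon"]
relevance = {"requirements": 0.9, "algorithm": 0.8,
             "input format": 0.7, "compiler bugs": 0.05,
             "cosmic rays": 0.001, "phase of the moon": 0.0001}

considered = semi_formal_select(factors, relevance, k=3)
# The reasoner now works with 3 factors instead of 2**6 = 64 subsets,
# accepting a small chance that an ignored factor mattered after all.
```

The cost of the pruning is exactly the forgone certainty mentioned
above: nothing guarantees that "compiler bugs" didn't matter this time.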

> Can we not calibrate such a
> system according to its strength? Can we not wring from it a calibrated
> probability of 99%?

99% probability of success _at a particular, specified task_, maybe
so. Self-improvement isn't a particular, specified task though.

> I am still trying to figure out the answers myself. What I do not
> understand is your confidence that there is no answer.

Well, I originally had the impression you believed it would be
possible to create a seed AI which:

- Would provably undergo hard takeoff (running on a supercomputer in a basement)
- Or else, would provably have e.g. a 99% probability of doing so

I'm confident both of these are self-evidently wrong; the things we're
dealing with here are simply not in the domain of formal proof.

Do I now understand correctly that your position is a slightly weaker
one: it would be possible to create a seed AI which:

- In fact has a 99% chance of undergoing a hard takeoff, even though
we can't mathematically prove it has?

If so, then I'm still inclined to think this is incorrect, but I'm not
as confident. My intuition says each individual step might have a 99%
chance of being successfully taken, but the overall process of hard
takeoff would then succeed with probability only 0.99^N; I gather your
intuition says otherwise.
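For concreteness, here is how a per-step success probability of 0.99
compounds; the values of N are illustrative, since nobody knows how
many steps a hard takeoff would actually involve:

```python
# Overall success probability after N steps, each independently
# succeeding with probability 0.99.
per_step = 0.99

for n in (10, 100, 500, 1000):
    overall = per_step ** n
    print(f"N = {n:4d}: overall success probability = {overall:.4f}")

# At N = 100 the overall probability has already fallen to about 0.366,
# and by N = 1000 it is below one in ten thousand.
```

This assumes the steps are independent and equally risky, which is of
course itself an intuition rather than an established fact.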

> My studies so
> far indicate that humans do these things very poorly

Compared to what standard?

> yet because we can
> try, there must be some component of our effort that works, that
> reflects Bayes-structure or logic-structure or *something*. At the
> least it should be possible to obtain huge performance increases over
> humans.

Bear in mind that, for all the evidence we have to the contrary, human
ability at strongly recursive self-improvement is zero.

> Why should a system that works probabilistically, not be refinable to
> yield very low failure probabilities? Or at least I may hope.

I hope so too, but the refining has to be done by something other than
the system itself.

> But at least a volition-extrapolating FAI would refract through humans
> on the way to deciding which options our world will offer us, unlike
> natural selection or the uncaring universe.

There may be something to be said for that idea, if it can actually be
made to work.

- Russell

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT