Re: Definition of strong recursive self-improvement

From: Eliezer S. Yudkowsky
Date: Sun Jan 02 2005 - 14:28:41 MST

Russell Wallace wrote:
> On Sun, 02 Jan 2005 11:38:54 -0600, Eliezer S. Yudkowsky
> wrote:
>>But if I knew how to build an FAI that worked so long as no one tossed
>>its CPU into a bowl of ice cream, I would count myself as having made
>>major progress.
> Yes, I think it's safe to say that would qualify as progress alright.
> Do you still believe in the "hard takeoff in a basement" scenario, though?

Leaving aside the choice of verbs, yes, I still guess that.

>>Meanwhile, saying that humans use "semi-formal reasoning" to write code
>>is not, I'm afraid, a Technical Explanation.
> No, really? I'm shocked :) (Good article that, btw.)
> If either of us were at the point of being able to provide a Technical
> Explanation for this stuff, this conversation would be taking a very
> different form. (For one thing, the side that had it could probably
> let their AI do a lot of the debating for them!) But my semi-technical
> explanation does answer the question you asked, which is how _in
> principle_ it can be possible for human programmers to ever write
> working code;

No, your answer is at best an *argument that* in principle it is
possible for human programmers to write code - and if we did not have
the example before our eyes, the argument wouldn't convince.

There are specific things about how humans write code that I do not
presently understand, even as to matters of fundamental principle. If I
had never seen humans write code, I wouldn't know to expect that they
could. I have read your answer, and my questions, even the fundamental
ones, remain unanswered for me. So either I missed something in your
response, or it does not count as an explanation.

I'm sorry if this seems harsh, but you have read the page; you know the
rules. "Semi-formal reasoning" is not an answer. You have to say what,
specifically, the dynamics of semi-formal reasoning are, and why it
describes the universe reliably enough to permit (at least) the observed
level of human competence in writing code... the phrase "semi-formal
reasoning" doesn't tell me what kind of code humans write, or even what
kind of code humans do not write. I'm not trying to annoy you; this is
a generally strict standard that I try to apply. You have not answered
me how humans write code, nor have you explained why the phrase
"semi-formal" escapes your previous impossibility argument.
Should not semi-formal reasoning be even less effective than formal
reasoning? Unless it contains some additional component, not present in
formal reasoning, that works well and reliably - perhaps not perfectly,
but still delivering reliably better performance than random numbers,
while not being the same as a formal proof. Can we not calibrate such a
system according to its strength? Can we not wring from it a calibrated
probability of 99%?
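
To make "calibrate such a system according to its strength" concrete,
here is a minimal sketch, in Python, of what a calibration check could
look like: a reasoner emits probability estimates alongside yes/no
claims, and we bucket the estimates and compare each bucket's stated
confidence with its observed hit rate. The simulated reasoner below is a
stand-in assumption for illustration, not a model of human semi-formal
reasoning.

```python
import random

def calibration_table(predictions, n_buckets=10):
    """predictions: list of (stated_probability, actually_true) pairs.
    Returns (nominal confidence, observed hit rate, count) per bucket."""
    buckets = [[] for _ in range(n_buckets)]
    for p, outcome in predictions:
        i = min(int(p * n_buckets), n_buckets - 1)
        buckets[i].append(outcome)
    table = []
    for i, outcomes in enumerate(buckets):
        if outcomes:
            mid = (i + 0.5) / n_buckets      # bucket's nominal confidence
            hit_rate = sum(outcomes) / len(outcomes)
            table.append((mid, hit_rate, len(outcomes)))
    return table

random.seed(0)
# A toy well-calibrated reasoner: when it says "p", the claim really is
# true with probability p. A miscalibrated one would show a systematic
# gap between the two columns below.
preds = []
for _ in range(10000):
    p = random.random()
    preds.append((p, random.random() < p))

for mid, hit, n in calibration_table(preds):
    print(f"stated ~{mid:.2f}  observed {hit:.2f}  (n={n})")
```

A reasoner whose 99% bucket comes back with a 99% observed hit rate is
one you could wring a calibrated probability of 99% from; the hard part,
of course, is getting the real thing to behave like the toy.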

> and it therefore suffices to answer your objection that
> if I was right about the problems, there could be no such thing even
> in principle.
>>Imagine someone who knew
>>naught of Bayes, pointing to probabilistic reasoning and saying it was
>>all "guessing" and therefore would inevitably fail at one point or
>>another. In that vague and verbal model you could not express the
>>notion of a more reliable, better-discriminating probabilistic guesser,
>>powered by Bayesian principles and a better implementation, that could
>>achieve a calibrated probability of 0.0001% for the failure of an entire
>>system over, say, ten millennia.

> How do you get a calibrated probability of failure, or even calculate
> P(E|H) for a few H's, in a situation where calculating P(E|H) for one
> H would take computer time measured in teraexaflop-eons, and plenty of
> them?
> (These are not rhetorical questions. I'm asking them because answers
> would be of great practical value.)

I am still trying to figure out the answers myself. What I do not
understand is your confidence that there is no answer. My studies so
far indicate that humans do these things very poorly; yet because we can
try, there must be some component of our effort that works, that
reflects Bayes-structure or logic-structure or *something*. At the
least it should be possible to obtain huge performance increases over
present-day human performance.

Why should a system that works probabilistically not be refinable to
yield very low failure probabilities? Or so, at least, I may hope.
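
One way a fallible, probabilistic verifier could still be refined toward
very low failure rates is to run several *independent* checks and accept
only when all pass. The independence assumption is doing all the work in
this sketch, and it is exactly the assumption that is hard to earn in
practice; the arithmetic, at least, is straightforward.

```python
def residual_failure(per_check_miss, n_checks):
    """Probability that a flaw slips past every check, assuming each
    check independently misses it with probability per_check_miss."""
    return per_check_miss ** n_checks

# Each check individually misses a flaw 10% of the time -- dismal by
# formal-proof standards -- yet six independent checks leave only a
# one-in-a-million residual miss rate.
print(residual_failure(0.10, 1))              # prints 0.1
print(f"{residual_failure(0.10, 6):.0e}")     # prints 1e-06
```

The exponential decay is why "reliably better than random, though short
of proof" can still compound into something very strong, *if* the errors
of the separate checks are not correlated.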

>>(For I do now regard FAI as an interim
>>measure, to be replaced by some other System when humans have grown up a
> So you want to take humans out of the loop for a while, then put them
> back in after a few millennia? (Whereas I'm inclined to think humans
> will need to stay in the loop all the way along.)

Humans *never were* in the loop on most questions, like what kind of
brain designs humans should have, or what kind of environment and
environmental rules we exist in, or what kind of decisions humans should
make. We were each born into a world we did not design, decided for us
by alien forces like natural selection. I am not so proud of my human
stupidity as to think that handing foundational decisions to modern-day
humans would accomplish anything but death by unintended consequences.
But at least a volition-extrapolating FAI would refract through humans
on the way to deciding which options our world will offer us, unlike
natural selection or the uncaring universe.

Someday we will become wise enough to understand which decisions we dare
make, instead of indignantly rejecting any suggestion that we are not so
wise. At that point, perhaps, I will have earned the right to choose
whether to choose. Yet I might wait a thousand subjective years after
that threshold before there was *not* too great a danger. If I want to
live to be a billion years old, I'd best not take that sort of risk when
I'm still young.

Eliezer S. Yudkowsky                
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT