RE: Definition of strong recursive self-improvement

From: Billy Brown (bbrown@transcient.com)
Date: Mon Jan 03 2005 - 11:28:35 MST


I think you (and some other posters on this thread) are implicitly assuming
here that "intelligence" is a single monolithic entity, which can only be
improved through a wholesale overhaul that changes every part of the AI at
once. This is actually a very unlikely scenario.

More plausible is an AI in which "intelligence" is the output of a very
large system containing many interacting subsystems, with each subsystem
containing a great deal of internal complexity. The performance of each
subsystem can be described in much less nebulous terms (data retrieval
speeds, reasoning speeds, success rates of internal algorithms at various
micro-tasks, and so on).

From this perspective there are many ways to improve the overall performance
of the AI that do not run afoul of your objections. On a low level you can
tune internal heuristics of individual subsystems, do performance tuning,
add local enhancements to a module's capabilities, etc. On a higher level
you can better allocate resources between subsystems, improve the interfaces
between them, write new subsystems to deal with new types of tasks, and so
on. You will occasionally need to port the whole system to a new
architecture to solve specific problems or provide new capabilities, but
even then the analysis required to validate the change can stay at the level
of system internals rather than getting stuck in the squishy semantics of
"smarter" and "better".

Billy Brown

> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Russell
> Wallace
> Sent: Sunday, January 02, 2005 5:23 PM
> To: sl4@sl4.org
> Subject: Re: Definition of strong recursive self-improvement
>
> On Sun, 02 Jan 2005 15:28:41 -0600, Eliezer S. Yudkowsky
> <sentience@pobox.com> wrote:
> > There are specific things about how humans write code that I do not
> > presently understand, even as to matters of fundamental principle. If I
> > had never seen humans write code, I wouldn't know to expect that they
> > could.
>
> Just to check, do you mean:
>
> a) That if you had seen humans plan journeys, construction projects,
> military campaigns, etc., you wouldn't know to expect that they could
> also write code on the grounds that a program and a plan are the same
> sort of thing wearing different hats,
>
> or
>
> b) That you wouldn't know to expect that humans would be capable of
> any sort of effective planning in a world where perfect planning is
> impossible?
>
> (If you mean a) I disagree with you, if you mean b) I agree.)
>
> > I'm sorry if this seems harsh, but, you have read the page, you know the
> > rules. "Semi-formal reasoning" is not an answer. You have to say what
> > specifically are the dynamics of semi-formal reasoning, why it works to
> > describe the universe reliably enough to permit (at least) the observed
> > level of human competence in writing code... the phrase "semi-formal"
> > reasoning doesn't tell me what kind of code humans write, or even what
> > kind of code humans do not write. I'm not trying to annoy you, this is
> > a generally strict standard that I try to apply.
>
> No problem, I'm just trying to point out that if either of us were in
> a position to answer that question, this conversation wouldn't be
> necessary in the first place. I'm in the position of a Renaissance
> alchemist telling a colleague "I think life works because it's made of
> zillions of clockwork-like components that are themselves made of
> atoms, rather than because of vital force" and getting the reply "That
> still doesn't count as a Technical Explanation" - in a sense the reply
> is correct, but the explanation is still the best available at the
> current time.
>
> > How humans write code
> > is not something that you have answered me, nor have you explained why
> > the phrase "semi-formal" excepts your previous impossibility argument.
> > Should not semi-formal reasoning be even less effective than formal
> > reasoning? Unless it contains some additional component, not present in
> > formal reasoning, that works well and reliably - perhaps not perfectly,
> > but still delivering reliably better performance than random numbers,
> > while not being the same as a formal proof.
>
> The additional component is the ability to ignore the indefinitely
> large set of things that in principle could matter but in practice
> probably don't in the particular context, in favor of the smallish set
> of things that are likely to matter, which makes a problem tractable,
> while forgoing certainty. (And no, I don't at this time have a
> technical explanation of _how_ this additional component works.)
>
> > Can we not calibrate such a
> > system according to its strength? Can we not wring from it a calibrated
> > probability of 99%?
>
> 99% probability of success _at a particular, specified task_, maybe
> so. Self-improvement isn't a particular, specified task though.
>
> > I am still trying to figure out the answers myself. What I do not
> > understand is your confidence that there is no answer.
>
> Well, I originally had the impression you believed it would be
> possible to create a seed AI which:
>
> - Would provably undergo hard takeoff (running on a supercomputer in a
> basement)
> - Or else, would provably have e.g. a 99% probability of doing so
>
> I'm confident both of these are self-evidently wrong; the things we're
> dealing with here are simply not in the domain of formal proof.
>
> Do I now understand correctly that your position is a slightly weaker
> one: it would be possible to create a seed AI which:
>
> - In fact has a 99% chance of undergoing a hard takeoff, even though
> we can't mathematically prove it has?
>
> If so, then I'm still inclined to think this is incorrect, but I'm not
> as confident. My intuition says each step might have a 99% chance of
> being successfully taken, but the success probability of the overall
> process of hard takeoff would be 0.99^N; I gather your intuition says
> otherwise.
>
> > My studies so
> > far indicate that humans do these things very poorly
>
> Compared to what standard?
>
> > yet because we can
> > try, there must be some component of our effort that works, that
> > reflects Bayes-structure or logic-structure or *something*. At the
> > least it should be possible to obtain huge performance increases over
> > humans.
>
> Bear in mind that for any evidence we have to the contrary, human
> ability at strongly recursive self-improvement is zero.
>
> > Why should a system that works probabilistically, not be refinable to
> > yield very low failure probabilities? Or at least I may hope.
>
> I hope so too, but the refining has to be done by something other than
> the system itself.
>
> > But at least a volition-extrapolating FAI would refract through humans
> > on the way to deciding which options our world will offer us, unlike
> > natural selection or the uncaring universe.
>
> There may be something to be said for that idea, if it can actually be
> made to work.
>
> - Russell


