Re: Seed AI milestones (was: Microsoft aflare)

From: Eliezer S. Yudkowsky
Date: Wed Feb 27 2002 - 09:02:14 MST

Ben Goertzel wrote:
> A) paths that begin with unintelligent self-modification
> B) paths that begin with purposeful intelligent non-self-modifying behavior
> C) paths that begin with a mixture of self-modification and purposeful
> intelligent behavior
> Eli and I, at this point, seem to share the intuition that B is the right
> approach. I have been clear on this for a while, but Eli's recent e-mail
> is the first time I've heard him clearly agree with me on this.

I suspect that's because you and I use the terms "hard takeoff" and "seed
AI" to refer to different phases of the AI's development. To be precise:

*** (Excerpt from a work in progress, may contain terms here undefined.)

    Epochs for holonic programming:

        First epoch: The AI can transform code in ways that do
        not affect the algorithm implemented. ("Understanding"
        on the order of an optimizing compiler.)
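As a concrete illustration (mine, not from the excerpt): a first-epoch transformation rewrites one piece of code into another that computes exactly the same function, much as an optimizing compiler does. The function below is an arbitrary example.

```python
# Two ways to compute the same function. A first-epoch transformation
# rewrites one form into the other without changing what is computed.
def sum_of_squares_loop(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_of_squares_expr(xs):
    # Equivalent rewrite: same results on every input; only the
    # surface form of the code differs.
    return sum(x * x for x in xs)

assert sum_of_squares_loop([1, 2, 3]) == sum_of_squares_expr([1, 2, 3]) == 14
```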

        Second epoch: The AI can transform algorithms in order
        to fit simple abstract beliefs about the design purposes
        of code. That is, the AI would understand what a stack
        implemented as a linked list and a stack implemented as
        an array have in common. (Note that this is already out
        of range of current AI.)

        Third epoch: The AI can draw a holonic line from simple
        internal metrics of cognitive usefulness (how fast a
        concept is cued, the usefulness of the concept returned)
        to specific algorithms. Consequently the AI would have
        the theoretical capability to invent and test new
        algorithms. This does not necessarily mean the AI would
        have the ability to invent good algorithms or better
        algorithms, just that invention in this domain would
        become possible. (A theoretical capacity for invention
        does not inherently imply improvement over and above the
        inventions of the programmers. This is determined by
        relative domain competency and by relative effort
        expended at a given focal point.)
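A minimal sketch of the third-epoch capability (the metric and candidate algorithms are illustrative assumptions of mine): measure a simple internal metric, here how fast a lookup is "cued", against competing algorithms and keep the best scorer.

```python
import timeit

# Hypothetical internal metric: how quickly a lookup returns.
# Both candidate algorithms are illustrative, not from the text.
def linear_lookup(data, key):
    for k, v in data:
        if k == key:
            return v

def dict_lookup(index, key):
    return index[key]

pairs = [(i, i * i) for i in range(1000)]
index = dict(pairs)

candidates = {
    "linear scan": lambda: linear_lookup(pairs, 999),
    "hash index":  lambda: dict_lookup(index, 999),
}
# Test each candidate against the metric; keep whichever scores best.
scores = {name: timeit.timeit(fn, number=1000) for name, fn in candidates.items()}
best = min(scores, key=scores.get)
```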

        Fourth epoch: The AI has a concept of "intelligence" as
        the top-level product of a continuous holonic system.
        The AI can draw a continuous holonic line from (a) its
        understanding of intelligence to (b) its understanding of
        cognitive subsystems and cognitive content to (c) its
        understanding of source code and stored data. Given a
        sufficiently complete understanding of the higher-level
        purpose of a cognitive subsystem, the AI would be able to
        design a new subsystem within the overall architecture.
        (Again, this does not intrinsically imply improvement.)

        Fifth epoch: The AI understands almost all of the design
        purposes of its lower and higher levels of organization.
        The AI would have the ability to design new cognitive
        subsystems.

        Sixth epoch: The AI's understanding of itself, and the
        AI's understanding of intelligence, matches or surpasses
        that of the human programmers.

    Epochs for sparse and continuous self-improvement:

        First epoch: The AI has a limited set of rigid routines
        which it applies uniformly. Once these routines are used
        up, they are gone. This is essentially analogous to the
        externally driven improvement of an optimizing compiler.
        An optimizing compiler may make a large number of
        "improvements", but they are not self-improvements, and
        they are not design improvements.

        Second epoch: The cognitive processes which create
        improvements have characteristic complexity on the order
        of Blue Gene, rather than on the order of an optimizing
        compiler. Sufficient investments of computing power can
        sometimes yield extra design improvements, beyond the
        default operations, but it is essentially an exponential
        investment for a linear improvement, and no matter how
        much computing power is invested, the total number of
        improvements conceivable is limited. (I would identify
        this as EURISKO's epoch.)
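The second epoch's "exponential investment for a linear improvement" can be modeled roughly as follows (the logarithmic form and the ceiling value are my illustrative assumptions; the excerpt gives no formula):

```python
import math

# Toy model: each doubling of computing power buys roughly one more
# design improvement, up to a hard ceiling on the total number of
# improvements conceivable. All numbers are illustrative.
def improvements(compute, ceiling=10):
    return min(int(math.log2(compute)), ceiling)

assert improvements(2) == 1         # a linear gain...
assert improvements(1024) == 10     # ...for an exponential investment
assert improvements(10**9) == 10    # past the ceiling, more compute buys nothing
```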

        Third epoch: Cognitive complexity in the AI's domain
        competency for programming is high enough that at any
        given point there is a large number of visible
        possibilities for improvement, albeit minor
        ones. The AI typically does not completely
        exhaust a given supply of opportunities before
        discovering new ones. However, only
        programmer-driven improvements in intelligence are
        large enough to make new opportunities for
        self-improvement visible.

        Fourth epoch: Internal improvements sometimes result in
        genuine improvements to "smartness", "creativity", or
        "holonic understanding", enough to make new possible
        improvements visible. AI-driven acquisition of domain
        expertise - independent learning - may also be powerful
        enough to "increase the opportunity supply" or "survey a
        new portion of the self-improvement landscape".

        Fifth epoch: Self-improvement is, theoretically,
        open-ended. Even in the complete absence of the human
        programmers, by the time the AI had used up all the
        improvements visible at a given level, that amount of
        improvement would be enough to "climb the next step of
        the ladder" and make a new set of improvements visible.

        Sixth epoch: The AI does not "use up all the
        improvements visible at a given level". Taking only a
        small subset of the immediately obvious opportunities is
        enough for the AI to climb the next step of the ladder,
        survey a new portion of the self-improvement landscape,
        and start over in a new space of possible improvements.
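The difference between the fifth and sixth epochs can be sketched as a toy model (every number and the unlock mechanism are illustrative assumptions of mine): improvements made at one level determine how many new opportunities become visible at the next.

```python
# Toy model of the self-improvement "ladder": the improvements taken at
# one level reveal the opportunities visible at the next.
def climb(initial_opportunities, fraction_taken, unlock_factor, rounds=20):
    """Return the count of visible opportunities after each round."""
    visible = initial_opportunities
    history = []
    for _ in range(rounds):
        taken = visible * fraction_taken     # improvements actually made
        visible = taken * unlock_factor      # new landscape they reveal
        history.append(visible)
    return history

# Fifth epoch: using *all* visible improvements just sustains the climb.
fifth = climb(100, fraction_taken=1.0, unlock_factor=1.0)
# Sixth epoch: a small subset of obvious opportunities suffices, so the
# supply of visible improvements grows instead of merely holding steady.
sixth = climb(100, fraction_taken=0.2, unlock_factor=10.0)
assert fifth[-1] == 100 and sixth[-1] > fifth[-1]
```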

    Epochs for human-dominated and AI-dominated improvement:

        First epoch: The AI can make optimizations at most on the
        order of an optimizing compiler, and cannot make design
        improvements or increase functional complexity. The
        combination of AI and programmer is not noticeably better
        than a programmer armed with an ordinary optimizing
        compiler.

        Second epoch: The AI can understand a small handful of
        components and make improvements to them, but the total
        amount of AI-driven improvement is small by comparison
        with programmer-driven development. However,
        sufficiently major programmer improvements do very
        occasionally trigger secondary improvement. The total
        amount of work done by the AI serves only as a
        measurement of progress and does not significantly
        accelerate work on the AI.

        Third epoch: AI-driven improvement is significant, but
        development is "strongly" programmer-dominated in the
        sense that overall systemic progress is driven almost
        entirely by the creativity of the programmers. The AI
        may have taken over some significant portion of the work
        from the programmers. The AI's domain competencies for
        programming and the deliberative manipulation of
        cognitive content may be critical to the AI's continued
        development.
        Fourth epoch: AI-driven improvement is significant, but
        development is "weakly" programmer-dominated. AI-driven
        improvements and programmer-driven improvements are
        roughly of the same order, but the programmers are better
        at it. Alternatively, the programmers have more
        subjective time in which to make improvements, due to the
        number of programmers or the slowness of the AI.

        Fifth epoch: AI-driven improvement is roughly equal to
        the amount of programmer-driven improvement.

        Sixth epoch: AI-driven improvement significantly
        outweighs programmer-driven improvement.

        Seventh epoch: Programmer-driven improvement is
        insignificant by comparison with AI-driven improvement.

    Epochs for overall intelligence:

        Tool-level AI: The AI's behaviors are immediately and
        directly specified by the programmers, or the AI "learns"
        in a single domain using prespecified learning
        algorithms.

        Prehuman AI: The AI's intelligence is not a significant
        subset of human intelligence. Nonetheless, the AI is a
        cognitive supersystem, with some subsystems we would
        recognize, and at least some mind-like behaviors. (A
        toaster oven does not qualify as a "prehuman chef"; a
        general kitchen robot might do so.)

        Infrahuman AI: The AI's intelligence is, overall, of the
        same basic character as human intelligence, but
        substantially inferior. The AI may excel in a few
        domains where it possesses new sensory modalities or
        other brainware advantages not available to humans.
        Humans talking to the AI usually recognize a mind on the
        other end. (An AI that lacks the ability to communicate
        and model external minds does not yet qualify as
        infrahuman.)
        Near-human AI, human-equivalent AI: The AI's
        intelligence is in the rough neighborhood of a human's.
        It may be locally inferior or superior in various
        domains, but general intelligence, reasoning ability, and
        learning ability are roughly that of a human.


*** (/excerpt)

Note that these are *epochs*, not *milestones*. They describe progress over
very long periods.

Anyway, Ben uses the term "hard takeoff" to refer to what I would describe
as the first or second epochs. I use "hard takeoff" in the sense that I
believe is standard in the transhumanist community, to refer to events past
the fifth or sixth epochs in various categories. This would seem to explain
Ben's belief that I "underestimate" the amount of work involved in "getting
to the Singularity from a hard takeoff".

-- -- -- -- --
Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence
