RE: Seed AI milestones (was: Microsoft aflare)

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 27 2002 - 08:36:08 MST


Of course, Eli is right.

First of all, "robust (i.e. not dying an instant death) modification of
its own code base" is not really the goal; Tierra already demonstrates
as much. A Tierra organism modifies its own code. Sure, this code is in
a peculiar custom virtual-machine language, but so what? The goal is
self-modification that is purposefully oriented toward improved general
intelligence. A rather loftier goal than non-death-causing
self-modification, and one that no system has yet achieved.
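
For concreteness, here is a minimal sketch in Python of what that kind
of non-fatal self-modification amounts to. This is not Tierra's actual
mechanics or instruction set; the toy instruction list and the
viability rule below are purely illustrative assumptions.

    import random

    INSTRUCTIONS = ("nop", "incr", "decr", "jump")  # toy instruction set

    def replicate(genome, mutation_rate=0.05):
        """Copy a genome instruction by instruction; sometimes miscopy."""
        child = []
        for instr in genome:
            if random.random() < mutation_rate:
                # Copying error: usually another valid instruction,
                # occasionally unparseable junk that kills the child.
                child.append(random.choice(INSTRUCTIONS + ("junk",)))
            else:
                child.append(instr)
        return child

    def viable(genome):
        """Crude 'instant death' test: every instruction must parse."""
        return all(instr in INSTRUCTIONS for instr in genome)

    population = [["nop"] * 8]         # a single ancestor organism
    for generation in range(20):
        children = [replicate(g) for g in population]
        population = [g for g in population + children if viable(g)]
        population = population[:256]  # finite "soup" caps the population

    mutants = sum(1 for g in population if g != ["nop"] * 8)
    print(f"{len(population)} survivors, {mutants} carrying mutations")

Nothing in this loop pushes the population toward greater intelligence;
it merely survives its own copying errors, which is exactly the point.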

But obviously, "self-modification that is purposefully oriented toward
improved general intelligence" is not a viable *first milestone*.
Rather, it's an Nth milestone, where N is probably in the range 3-20,
depending on one's approach.

Out of all the paths by which one *could* work toward the goal of
"self-modification that is purposefully oriented toward improved
general intelligence", one can imagine:

A) paths that begin with unintelligent self-modification
B) paths that begin with purposeful intelligent non-self-modifying
   behavior
C) paths that begin with a mixture of self-modification and purposeful
   intelligent behavior

Eli and I, at this point, seem to share the intuition that B is the
right approach. I have been clear on this for a while, but Eli's recent
e-mail is the first time I've heard him clearly agree with me on it.

Eugene, if your intuition is A, that's fine. In that case, something
like Tierra (which demonstrates robust self-modification, not leading
to instant death) may be viewed as a step toward seed AI. However, the
case of Tierra is a mild counterargument to the A route, because its
robust self-modification seems to be inadequately generative -- i.e.,
like all other Alife systems so far, it yields a certain amount of
complexity and then develops no further.

-- Ben G

> -----Original Message-----
> From: owner-sl4@sysopmind.com [mailto:owner-sl4@sysopmind.com] On Behalf
> Of Eliezer S. Yudkowsky
> Sent: Wednesday, February 27, 2002 6:48 AM
> To: sl4@sysopmind.com
> Subject: Seed AI milestones (was: Microsoft aflare)
>
>
> Eugene Leitl wrote:
> >
> > Your first fielded alpha must demonstrate robust (i.e. not dying an
> > instant death) modification of its own code base as a first milestone.
>
> Uh, not true. A seed AI is fundamentally built around general
> intelligence, with self-improvement an application of that
> intelligence. It may also use various functions and applications of
> high-level intelligence as low-level glue, which is an application
> closed to humans, but that doesn't necessarily imply robust
> modification of the low-level code base; it need only imply robust
> modification of any of the cognitive structures that would ordinarily
> be modified by a brainware system.
>
> The milestones for general intelligence and for self-modification are
> independent tracks - though, of course, not at all independent in any
> actual sense - and my current take is that the first few GI milestones
> are likely to be achieved before the first code-understanding
> milestone.
>
> It's possible, though, that I may have misunderstood your meaning,
> since I don't know what you meant by "first fielded alpha". You don't
> "field" a seed AI, you tend its quiet growth.
>
> -- -- -- -- --
> Eliezer S. Yudkowsky http://intelligence.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>


