Re: Conservative Estimation of the Economic Impact of Artificial Intelligence

From: Russell Wallace (russell.wallace@gmail.com)
Date: Thu Dec 30 2004 - 09:57:19 MST


On Wed, 29 Dec 2004 18:27:43 -0500, Eliezer Yudkowsky
<sentience@pobox.com> wrote:
> Recursive self-improvement seems to be missing in this discussion. Just a
> band of humans gradually improving an AI that slowly acquires more and more
> abilities. It makes for a nice fantasy of slow, relatively safe
> transcendence where you always have plenty of time to see threats coming
> before they hit.

Recursive self-improvement is a nice idea, but I'm still curious: why
do you believe it can work, even in principle?

Suppose an AI hits on a way to create a supposedly improved version
of itself, using heuristic methods (i.e., the same way human engineers
do things). How does it know whether the modified version will in
fact be an improvement? There are only two methods:

- Formal proof. In general, for interesting designs A and B, there is
no formal proof that A is better than B, even when it is in fact
better.

- Trial and error. This is how human engineers work. The problem when
you're dealing with self-replicating entities is that this gets you
into open-ended evolution. Maybe that would work, but if so it would
be for evolution's value of "work": it would select for an optimal
self-replicator, and an optimal self-replicator would just turn the
universe into copies of itself. We wouldn't even get a supply of
paperclips out of it. (A rough sketch of the kind of loop I mean is
below.)
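To make the trial-and-error case concrete, here's the sort of loop I
have in mind (toy Python; the names and the scoring function are
invented purely for illustration, not how anyone would actually build
it):

import random

def propose_variant(params):
    """Heuristically perturb the current design -- a stand-in for an AI
    proposing a modification to itself."""
    return [p + random.gauss(0, 0.1) for p in params]

def benchmark(params):
    """Empirical score on some fixed test suite. Whatever this happens
    to measure is the only thing the loop selects for."""
    return -sum((p - 1.0) ** 2 for p in params)  # toy objective

def trial_and_error_improvement(params, generations=1000):
    """Keep a candidate only if it scores higher on the benchmark.
    This is selection by measured performance, i.e. evolution with the
    benchmark as the fitness function."""
    score = benchmark(params)
    for _ in range(generations):
        candidate = propose_variant(params)
        candidate_score = benchmark(candidate)
        if candidate_score > score:  # "improvement" = higher score, nothing more
            params, score = candidate, candidate_score
    return params, score

if __name__ == "__main__":
    final, final_score = trial_and_error_improvement([0.0, 0.0, 0.0])
    print(final, final_score)

The point being that "improvement" in such a loop means nothing beyond
"scores higher on whatever benchmark() happens to measure"; and once
the entities doing the proposing are self-replicating, the de facto
benchmark is differential replication.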

Is there a third possibility that I'm missing?

- Russell


