Re: Conservative Estimation of the Economic Impact of Artificial Intelligence

From: Thomas Buckner
Date: Thu Dec 30 2004 - 20:15:36 MST

--- Russell Wallace <> wrote:

> On Wed, 29 Dec 2004 18:27:43 -0500, Eliezer
> Yudkowsky <> wrote:
> > Recursive self-improvement seems to be
> missing in this discussion. Just a
> > band of humans gradually improving an AI that
> slowly acquires more and more
> > abilities. It makes for a nice fantasy of
> slow, relatively safe
> > transcendence where you always have plenty of
> time to see threats coming
> > before they hit.
> Recursive self-improvement is a nice idea, but
> I'm still curious as to
> why you believe it can work, even in principle?
Because it's been demonstrated in human society
for quite some time! (Individual humans
themselves may not have gotten radically smarter
in the last 10K years, but human society's
information-processing and power over the world
have been on an exponential growth curve the
whole time).
> Suppose an AI hits on a way to create an
> improved version of itself,
> using heuristic methods (i.e. the same way
> human engineers do things).
> How does it know whether the modified version
> will in fact be an
> improvement? There are only two methods:
> - Formal proof. In general, for interesting
> values of A and B, there
> isn't any formal proof that A is better than B,
> even when it is in
> fact better.
> - Trial and error. This is how human engineers
> work. The problem when
> you're dealing with self-replicating entities
> is that this gets you
> into open-ended evolution. Maybe this would
> work, but if so it would
> be for evolution's value of "work", which would
> select for an optimal
> self-replicator; it would just turn the
> universe into copies of
> itself. We wouldn't even get a supply of
> paperclips out of it.
> Is there a third possibility that I'm missing?
> - Russell
Evolutionary trial-and-error tournaments between
subsystems of itself, with winning strategies
globally adopted, but still under control of the
singleton. The SAI 'wants' to optimize
information-processing power, not
self-replication per se. I have never been
convinced that a SAI would want paperclips any
more than we do. It might want something we can't
understand, but if it merely creates endless
copies of something essentially stupid, then it
is stupid also, even if it is clever about
achieving a stupid goal. On the other hand, if I
saw a SAI on a computronium-creating binge, I
would proceed on the assumption that it had a
good reason I couldn't grasp.
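Sketched in code, a tournament scheme like the one I'm describing might look something like the following. This is a purely illustrative Python toy, not anything from the discussion: the benchmark, the strategy names, and the singleton's veto hook are all my own assumptions. Candidate strategies compete on the same task, and the winner is adopted globally only if the overseeing singleton signs off.

```python
import random
import time

def bubble_sort(xs):
    """A deliberately naive candidate strategy (slow but correct)."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def evaluate(sort_fn):
    """Score a strategy: minus its runtime if correct, -inf if wrong."""
    data = [random.random() for _ in range(1000)]
    start = time.perf_counter()
    result = sort_fn(data)
    elapsed = time.perf_counter() - start
    if result != sorted(data):
        return float("-inf")
    return -elapsed

def tournament_round(strategies, evaluate, approve):
    """Pit every candidate against the same benchmark; the winning
    strategy is adopted globally only if the singleton approves it."""
    scores = {name: evaluate(fn) for name, fn in strategies.items()}
    winner = max(scores, key=scores.get)
    return winner if approve(winner, scores[winner]) else None

strategies = {
    "builtin": lambda xs: sorted(xs),
    "bubble": bubble_sort,
}

# Veto hook: here the singleton simply rejects incorrect strategies.
winner = tournament_round(
    strategies, evaluate,
    lambda name, score: score != float("-inf"),
)
print(winner)  # the faster correct strategy wins the round
```

The point of the veto hook is that selection pressure stays instrumental: the singleton's goal function decides what "winning" is allowed to mean, so the tournament optimizes information-processing power rather than open-ended self-replication.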

The whole biz-school angle in the original post
puts me in mind of Theodore Sturgeon's novella
Microcosmic God.
A scientist named Kidder partners with a banker
named Conant. Kidder, to accelerate his research,
creates the Neoterics, tiny creatures who live
fast, die fast, and develop a culture of their
own so fast that they soon outstrip the humans.
Conant makes more money than Croesus off Kidder's
inventions, but it is Kidder who has the
Neoterics' allegiance, and they throw up an
impenetrable dome over his island before the Air
Force can bomb it. The story ends with the
enigmatic dome, and the knowledge that they will
come out sometime soon...

Tom Buckner


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT