Re: The Future of Human Evolution

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Sep 28 2004 - 10:00:28 MDT


Randall Randall wrote:
>
> I agree that the debate is about the relative order of
> working examples, but I think that the relative dangers
> involved are quite relevant. In particular, while newly
> built nanofactories will certainly allow a brute forcing
> of the AI problem at some point, it seems clear that
> building vehicles sufficient to leave the vicinity will
> be effective immediately (essentially), as that problem
> is well understood, unlike the AI problem. In any case,
> it seems like a simple grid choice to me, where one axis
> is limit on travel (FTL or STL), and the other which
> technology comes to fruition first (MNT or AI). In an
> FTL world, FAI is the only apparent hope of surviving the
> advent of AI. In an STL world, however, MNT can be a
> sufficient technology for surviving unFriendly AI, for
> some. Since we appear to live in an STL world, I prefer
> MNT first.

Suppose you pack your bags and run away at .99c. I know too little to
compute what fraction of UFAIs, randomly selected from the class that
meddling dabblers are likely to create, would run after you at .995c.
But I guess that the fraction is very high. Why would a paperclip
maximizer do this? Because you might compete with it for paperclip
resources if you escaped. If you have any hope of creating an FAI on board
your fleeing vessel, the future of almost any UFAI that doesn't slip out of
the universe entirely (and those might not present a danger in the first
place) is more secure if it kills you than if it lets you flee. The faster
you run, the less subjective time you have on board the ship before someone
catches up with you, owing to relativistic time dilation.
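
To make the time-dilation arithmetic concrete, here is a toy
special-relativity sketch in Python. It assumes a ten-year head start in
the rest frame and a pursuer that always holds a 0.005c margin over
whatever speed you flee at; both numbers are illustrative guesses, not
estimates of anything.

import math

def proper_time_until_caught(v_flee, v_chase, head_start_years):
    """Subjective years aboard the fleeing ship between its departure
    and the moment the pursuer draws level.  Speeds are fractions of c;
    times are measured in the rest frame the pursuer launches from."""
    # Rest frame: positions coincide when v_flee*t = v_chase*(t - head_start)
    t_catch = v_chase * head_start_years / (v_chase - v_flee)
    # Time dilation aboard the fleeing ship
    return t_catch * math.sqrt(1.0 - v_flee ** 2)

for v_flee in (0.9, 0.95, 0.99):
    tau = proper_time_until_caught(v_flee, v_flee + 0.005, 10.0)
    print(f"flee at {v_flee:.3f}c, chased at {v_flee + 0.005:.3f}c: "
          f"about {tau:.0f} subjective years before being caught")

Under those assumptions the subjective time shrinks from roughly 790 years
at .9c to roughly 280 years at .99c, which is the direction of the effect
I'm pointing at. (With a pursuer whose speed is fixed rather than pegged
just above yours, the arithmetic comes out differently; the margin
assumption matters.)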

Suppose it doesn't run after you. In that case, if more than one group
escapes, say, 10 groups, then any one of them can also potentially create
a UFAI that will chase after you at .995c.

Suppose only one group escapes. If you have any kind of potential for
growth, any ability to colonize the galaxy and turn into something
interesting, you *still* have to solve the FAI problem before you can do it.

Running away is a good strategy for dealing with bioviruses and military
nanotech; for AI, rather less so.

I also dispute that you would have .99c-capable escape vehicles
*immediately* after nanotech is developed. It seems likely to me that
years, perhaps a decade or more, would elapse between the development of
absurdly huge nanocomputers and workable escape vehicles. It's not just
the design; it's the debugging. Computers you can tile. Of course
there'll also be a lag between delivery of nanocomputers and when a UFAI
pops out. I merely point out the additional problem.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

