Re: The Future of Human Evolution

From: Randall Randall (randall@randallsquared.com)
Date: Wed Sep 29 2004 - 01:36:12 MDT


On Sep 28, 2004, at 12:00 PM, Eliezer Yudkowsky wrote:
> Suppose you pack your bags and run away at .99c. I know too little to
> compute the fraction of UFAIs randomly selected from the class that
> meddling dabblers are likely to create, that would run after you at
> .995c. But I guess that the fraction is very high. Why would a
> paperclip maximizer do this? Because you might compete with it for
> paperclip resources if you escaped. If you have any hope of creating
> an FAI on board your fleeing vessel, the future of almost any UFAI
> that doesn't slip out of the universe entirely (and those might not
> present a danger in the first place) is more secure if it kills you
> than if it lets you flee. The faster you run, the less subjective
> time you have on board the ship before someone catches up with you,
> owing to lightspeed effects.
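
For concreteness, here's the rough arithmetic behind that last
sentence, with numbers that are mine rather than Eliezer's: a
one-year head start before the pursuer launches, and a pursuer
that is 0.005c faster in each case.

# Rough sanity check of the chase arithmetic quoted above.
# The one-year head start and the 0.005c speed edge are my
# illustrative assumptions, not anything from the original post.
from math import sqrt

def time_until_caught(v_run, v_chase, head_start_years=1.0):
    """Return (rest-frame years the runner is in flight,
    years elapsed on the runner's own clock), with speeds given
    as fractions of c and a pursuer launching head_start_years later."""
    lead = v_run * head_start_years        # light-years of lead at pursuer launch
    t_catch = lead / (v_chase - v_run)     # rest-frame years to close that lead
    t_total = head_start_years + t_catch   # total rest-frame years before capture
    gamma = 1.0 / sqrt(1.0 - v_run ** 2)   # runner's clock runs slow by this factor
    return t_total, t_total / gamma

print(time_until_caught(0.99, 0.995))      # ~199 rest-frame years, ~28 on board
print(time_until_caught(0.90, 0.905))      # ~181 rest-frame years, ~79 on board:
                                           # run slower, get more subjective time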

That argument assumes there is a significant chance of being able
to find and kill all escapees. Given the apparent ease of blending
in with the cosmic background over a medium-size angle (say, one
radian), the chances of finding all escapees seem quite slim. By
killing the many or most escapees it can find, a maximizer of
whatever therefore only increases the chances of blowback from
the ones it misses. This game has interesting parallels with
current international games.

> Suppose it doesn't run after you. In that case, if more than one
> group escapes, say, 10 groups, then any one of them can also
> potentially create an UFAI that will chase after you at .995c.

But UFAIs created light years away are not huge threats, since
they would need to know you're there, and that information seems
likely to be scarce.

> Suppose only one group escapes. If you have any kind of potential for
> growth, any ability to colonize the galaxy and turn into something
> interesting, you *still* have to solve the FAI problem before you can
> do it.

Given a graphic example of the dangers of AI development, an
escapee group would probably pursue other approaches, such as
upload enhancement, which can, at least, start with a known
ethical upload (oneself, in the limit).

> Running away is a good strategy for dealing with bioviruses and
> military nanotech. AI rather less so.
>
> I also dispute that you would have .99c-capable escape vehicles
> *immediately* after nanotech is developed. It seems likely to me that
> years, perhaps a decade or more, would lapse between the development
> of absurdly huge nanocomputers and workable escape vehicles.

I was actually thinking of only .10c vehicles. Get beyond the
mass-dense part of the solar system and coast, radiating waste
heat forward as much as possible. If we pause in the Oort cloud
or the asteroid belt, producing a few million decoys per actual
escape vehicle seems well within the capability of relatively
dumb software.
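
Back-of-the-envelope, since the numbers matter here (the
destination distance and decoy counts below are just assumptions
of mine):

# Illustrative .10c scenario: a destination 4.3 light-years out,
# ten real vehicles, and two million decoys per real vehicle.
# All three numbers are placeholders, not claims.
from math import sqrt

v = 0.10                              # fraction of c
gamma = 1.0 / sqrt(1.0 - v ** 2)      # ~1.005: time dilation is negligible at .10c
trip_years = 4.3 / v                  # ~43 years of coasting, ship time ~= rest time

real_ships = 10
decoys_per_ship = 2_000_000
targets = real_ships * (1 + decoys_per_ship)
print(f"gamma = {gamma:.3f}, trip = {trip_years:.0f} yr, "
      f"targets a pursuer must examine = {targets:,}")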

> It's not just the design, it's the debugging. Computers you can tile.
> Of course there'll also be a lag between delivery of nanocomputers
> and when an UFAI pops out. I merely point out the additional problem.

One of my assumptions is that generic optimizers are difficult
enough that some sort of genetic algorithm will be required to
produce the first one. I realize we differ on this, since you
believe you have a solution that doesn't require a GA.
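
For what it's worth, by "genetic algorithm" I just mean the usual
evaluate/select/mutate loop, roughly as sketched below; the
bitstring genome and count-the-ones fitness function are
stand-ins, not a claim about how a real optimizer-producing GA
would be encoded.

# Minimal GA sketch: placeholder genome and fitness, standard loop.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 64, 100, 200, 0.01

def fitness(genome):
    # Placeholder objective: count the 1 bits.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    # Single-point crossover.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 5]   # keep the fittest 20%
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))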

--
Randall Randall <randall@randallsquared.com>
"And no practical definition of freedom would be complete
  without the freedom to take the consequences. Indeed, it
  is the freedom upon which all the others are based."
  - Terry Pratchett, _Going Postal_

