Re: Deliver Us from Evil...?

From: James Higgins (jameshiggins@earthlink.net)
Date: Wed Apr 04 2001 - 23:08:49 MDT


At 12:36 AM 4/5/2001 -0400, Eliezer S. Yudkowsky wrote:
>Samantha Atkins wrote:
> > Or we might find the Sysop, no matter how wise and benevolent and
> > transparent, intolerable for the types of creatures we are, but be unable
> > to choose differently.
>
>Well, then, I guess the challenge lies in creating a Friendly AI that
>doesn't like intolerability. Your statement also implicitly assumes that
>less frustration exists in a Sysop-free Universe, which doesn't
>necessarily follow; the next most probable alternative to a Sysop might be
>imprisonment within a less benevolent entity. If a nonSysop scenario is
>both desirable and temporally stable, then it still looks to me like
>you'll need a superintelligent, Friendly, Transition Guide as a means of
>getting there without any single human upload taking over until there's a
>stable base population. Friendly AI is still the best tactic in either
>case.

I'm not personally convinced that uploading a human first isn't the correct
answer. I'm rather inclined to trust a person more than an AI to map out
our destiny. I think this boils down to 2 possibilities:

         1) Superintelligence does not end hatred, jealousy, prejudice,
envy, etc.

         or

         2) Superintelligence provides a perspective that makes such things
unimportant

If it turns out that 2 is true for any SI, then we don't need a Sysop, and
it would be safe to upload (nearly) any human to become the first SI.

If 1 is the case, I honestly think we're screwed long-term anyway. A Sysop
environment will just breed contempt, jealousy, etc. in the SIs within its
domain. Eventually you end up with either a breakout from the Sysop
environment or a mostly unhappy population.

So far I haven't read anything that convinces me that creating a Sysop is
the correct path.

> > We are spinning the barrel and pulling the trigger in a cosmic game of
> > Russian roulette. The barrel holds thousands of rounds and only a few
> > chambers are empty.
>
>Where do you get *that* set of Bayesian priors from? Not that it makes
>much of a difference, I suppose; all that counts are the proportional
>qualities of the empty chambers, not how many empty chambers there are.

Hmm, it's more like there would be three possibilities for any given pull:
         A) Jackpot (everyone's version of heaven wrapped into 1 neat bundle)
         B) No (perceivable) effect
         C) Boom (use your imagination)

Now, the real question is how likely each of those is to happen. I have a
feeling A is on the order of 10% or less. If we're really lucky, C is
also 10% or less. But I'm not feeling very lucky.

> > If we "win" we either are set for all eternity or
> > get the chance to play again some other time. Except that it is the
> > entire world and all of us forever that the gun is pointing at. To do
> > that we have to be very, very damn sure that there is no other way
> > and/or that this (building the SI) is the best odds we have.
>
>One, all we need is the conviction that (a) time is running out and (b)
>building the SI is better than any other perceived course of action.
>Certainty is a luxury if you live in a burning house.

Time is running out for what?

How can we say that building the SI is better than any other perceived
course of action? I can actually imagine life on the other side of
creating an SI being rather bleak. Imagine a reality where we know
everything (to a reasonable approximation) and can do anything
(ditto). This would get boring very quickly. Since the major driving
force behind human nature seems to be continual learning, what do you do
when you have learned everything? I think this reality would be more like
my personal hell than anything else.

>Two, I am very, very damn sure.

Very, very damn sure of exactly what?


