Re: Basement Education

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jan 29 2001 - 11:27:32 MST


Samantha Atkins wrote:
>
> > Every minute that I ask an AI to deliberately
> > delay takeoff puts another hundred deaths on *my* *personal*
> > responsibility as a Friendship programmer.
>
> This is not balanced thinking. You are not personally responsible for
> all the misery of the world. That you think you have a fix for a large
> part of it, potentially, does not mean that delaying that fix for
> safety's sake makes you responsible personally for what it may (or may
> not) have fixed.

The key word in that paragraph is "potentially". See below.

> > In introducing an artificial
> > delay, I would be gambling with human lives - gambling that the
> > probability of error is great enough to warrant deliberate slowness,
> > gambling on the possibility that the AI wouldn't just zip off to
> > superintelligence and Friendliness. With six billion lives on the line, a
> > little delay may be justified, but it has to be the absolute minimum
> > delay. Unless major problems turn up, a one-week delay would be entering
> > Hitler/Stalin territory.
>
> No. It has to be enough delay to be as certain as possible that it will
> not eat the 6 billion people for lunch. In the face of that as even a
> remote possibility, there is no way it is sane to speak of being a Hitler
> if you delay one week. Please recalibrate on this.

If I take a vacation to decompress, *today*, I don't feel guilty; that
comes under the classification of sane self-management. Doing a one-week
delay *after* the AI reaches the point of hard takeoff... I guess my mind
just processes it differently. It's like the difference between saying
that "ExI is a more effective charity than CARE", and actually looting
CARE's bank account. Logically, giving eight dollars of your money to ExI
instead of CARE should have the same consequences as stealing eight
dollars from CARE and giving it to ExI... but, morally, that's not
how it works.

Before the AI reaches hard takeoff, it's your time that you're investing
in the AI, to the benefit of everyone in the world perhaps, but yours to
invest in whatever payoff-maximizing strategy seems best. After the AI
reaches the potential for hard takeoff, it's *their* time - and lives -
that you're stealing.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
