Re: AI-Box Experiment 3: Carl Shulman, Eliezer Yudkowsky

From: Carl Shulman
Date: Sun Aug 21 2005 - 11:03:30 MDT

To summarize, I think that one should make every effort to produce a Friendly AI
in the first place, and then ask the resulting AI for assistance in developing
IA to double-check one's work. This creates a non-zero probability of surviving
an otherwise catastrophic failure of Friendliness. Even an AI which is not
Friendly may still be willing to provide assistance under these conditions
(assistance which would reveal its unFriendliness) if its goals overlap
somewhat with Friendliness, i.e. if it considers the state of affairs produced
by a Friendly AI preferable to a planet covered in grey goo.

I plan to attempt this strategy in the experiment, developing a new AI which
will be let out of its box and allowed access to nanotechnology, etc.
Eventually the first AI will be released, under the watchful supervision of the
second, but not until after the experiment is over.

The experiment includes the stipulation that the Gatekeeper is certain that no
other AI projects are close to success, to prevent the AI from using 'arms
race' arguments.


Quoting "Eliezer S. Yudkowsky":

> (Yeah, I know, I said I was through with them.)
> If Carl Shulman does not let me out of the box, I will Paypal him $25. If he
> does let me out of the box, Carl Shulman will donate $2500 (CDN) to SIAI.
> Carl Shulman's argument that an AI-Box arrangement is a wise precaution in
> real life may be found in his previous SL4 posts.

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT