Re: Transcript. please? (Re: AI-Box Experiment 3)

From: Russell Wallace (
Date: Mon Aug 22 2005 - 17:40:29 MDT

On 8/23/05, Eliezer S. Yudkowsky <> wrote:
> Are we supposed to simulate a Friendly AI in a box? Why wouldn't you just let
> it out immediately?

My view is that:

1) You can't create AI in a box in the first place.
2) There's no point in doing so, because you won't get any use out of
it while it's in the box; therefore you shouldn't set out to create AI
unless you're confident enough in your procedures that you will, as
you say, be willing to let it out immediately. (Previously suggested
actions like having the AI create IA mods for you - and going ahead
and using them - in my opinion effectively count as letting it out.)

For the scenario, we can postulate I'm wrong about #1. For #2... let's
take the following scenario: a group has managed to create AI of
unknown Friendliness in a box, disagreeing with me about the
usefulness thereof, and is going to have someone sit down and talk to
it over a text link for a couple of hours to decide whether to let it
out. Somewhat exasperated by this, I volunteer to be that someone (my
motive for volunteering being that, as I said before, I believe myself
to be more resistant than most to persuasion).

- Russell

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT