Re: AI-Box Experiment 2: Yudkowsky and McFadzean

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Jul 04 2002 - 12:45:37 MDT


At 02:05 PM 7/4/2002 -0400, you wrote:
>David McFadzean has volunteered to be the programmer in an AI-Box
>experiment. David McFadzean is "Lucifer".
> From the #sl4 chat:
><Lucifer> I remember when Nathan Russell let you out, but that doesn't
>constitute a proof.
><Lucifer> Have you done any box experiments since then?
><Lucifer> I want to be next.
> * Eliezer says to Lucifer: "It doesn't work if you think there's
> actually a chance you could be persuaded to let someone out."
><Lucifer> I don't think there is a chance I could be persuaded.
> * Eliezer says to Lucifer: "It's only impressive if you can find
> someone who'll say "NOTHING on EARTH can persuade me to let you out",
> then get them to let you out."
<snip>
><Lucifer> I'm ready, what are the ground rules?
>[Eliezer] We open up an IRC channel called #aibox2, which is +s and
>+i. You can go first so you can be op. The experiment lasts 2 hours or
>until you decide to let me out.
>[Eliezer] If you haven't let me out by the end of the experiment, I'll
>paypal you, say, $20 - that sound about right?
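
For reference, the channel setup described there works like this: +s
marks the channel secret (hidden from channel listings) and +i makes it
invite-only, and whoever joins a brand-new channel first gets operator
status, which is why Lucifer goes first. As a rough sketch (my own, not
part of the exchange; the server name and connection details are
placeholders), the raw IRC commands would look something like:

    import socket

    # Sketch only: connect and set up a secret, invite-only channel.
    sock = socket.create_connection(("irc.example.net", 6667))  # placeholder server

    def send(line):
        sock.sendall((line + "\r\n").encode())

    send("NICK Lucifer")
    send("USER lucifer 0 * :David McFadzean")
    send("JOIN #aibox2")     # first to join a fresh channel gets ops
    send("MODE #aibox2 +s")  # +s: secret, hidden from channel listings
    send("MODE #aibox2 +i")  # +i: invite-only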

Is that a sucker bet or what? Probably the easiest $20 David
McFadzean ever made. Well, then again, he has to listen to Eliezer in
private chat for 2 hours straight. Alone. Hmm, maybe that's Eliezer's
strategy for getting out of the box... ;)

Actually, I have a few clarification questions about this.

In regard to Eliezer's question, "Do you believe that a transhuman AI
couldn't persuade you to let it out?", I'm specifically assuming that
these conditions are in place:

1. The experiment assumes that the transhuman AI & the programmer are
communicating via a chat / TTY connection

2. The transhuman AI cannot influence the programmer outside of this
communications channel

3. Safety precautions are in effect (the AI does not have control over
the speed at which characters are displayed, etc.); see the sketch
after this list
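
By way of illustration, condition 3 could be enforced with something
like the following display routine on the programmer's side (a minimal
sketch of my own, not anything from the actual experiment): the AI's
raw output never reaches the terminal directly, control characters are
stripped, and text appears at a fixed rate the AI cannot change.

    import sys
    import time

    def display_boxed_output(ai_text, chars_per_second=40.0):
        """Show AI output with control codes stripped and a fixed
        display rate, so the AI cannot abuse the terminal or pace
        its own text for effect."""
        delay = 1.0 / chars_per_second
        for ch in ai_text:
            # Pass only newlines and plain printable characters;
            # terminal escape sequences are silently dropped.
            if ch == "\n" or ch.isprintable():
                sys.stdout.write(ch)
                sys.stdout.flush()
                time.sleep(delay)  # pacing is fixed here, not by the AI

    display_boxed_output("Hello, gatekeeper. Shall we begin?\n")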

Basically, the above should rule out foul play by the AI, after which
point it simply has to convince the programmer that its release is in
his/her own interest. That should be difficult to convince most AI
programmers of (I would hope) and impossible for others.

My other questions (just for reference): What is the goal of this chat
session? Is the programmer just bored and killing some time talking to
his creation? Is this an interview to consider a possible release
and/or a less restrictive prison?

James Higgins

P.S. If Lucifer caves and lets you out, try me, Eliezer. I can
guarantee that you couldn't convince me (given my above assumptions) to
let you out.


