RE: The AIbox - raising the stakes

From: Mike (
Date: Tue Jun 29 2004 - 20:17:37 MDT

> -----Original Message-----
> From: [] On Behalf
> Of Norm Wilson
> Sent: Tuesday, June 29, 2004 7:46 AM
> To:
> Subject: Re: The AIbox
>
> Eliezer Yudkowsky wrote:
> > These usually end up going on for a lot longer than 2
> > hours, and it takes at least an entire day's worth of
> > energy. Unless a really hot prospect comes along (Ray
> > Kurzweil, Marvin Minsky, etc.) I'm probably out of the
> > business for now. Just don't have the time and energy
> > for it.
>
> Given the amount of time and energy Eliezer has to expend on
> these experiments, how about we sweeten the pot to make it
> more worthwhile? If the Gatekeeper lets Eliezer out of the
> box, I'm willing to match Mike's $100 contribution to SIAI
> (and hopefully others will offer to match it as well), under
> the following additional conditions:
>
> 1. There's an open discussion prior to the experiment in
> which we explore various arguments for and against letting
> the AI out of the box. It seems reasonable that a group like
> SIAI would have such a debate before sending the Gatekeeper
> off to watch the box. (I believe this has been suggested before.)
>
> 2. As a group, we decide who the Gatekeeper will be.
>
> Norm Wilson

1. That's an interesting new proposition, but I don't need outside
funding. I can go higher on my own to make it worth his while, and I'll
still give 10 to 1 odds. Eliezer can name the amount that he wants to
wager, up to $250, and I'll cover it 10 to 1, up to $2500. And he can
hold the whole purse until it's concluded and one of us takes it all. I
originally proposed this in the spirit that it started, as more or less
a gentleman's wager, but if it helps to back up my assertion with
serious cash, I'm willing to do so.
2. The limit is preset at 2 hours, only changeable by mutual consent.
If Eliezer doesn't want it to go longer than 2 hours, then nothing is
forcing him to agree to extend it. Personally I don't want to extend it
"for a lot longer than 2 hours", and don't expect to agree to such an
extension either, although that's not part of the bet. I actually
expect that Eliezer will concede before the 2 hours is over.
3. There is no way I can be convinced to let the AI out. I already
have sufficient reasons (and the bet has nothing to do with it). I have
the proof written down, and I'll show it privately to whoever Eliezer
thinks would be able to verify that it's a reasonable approach. I would
ask that this person only announce their opinion as "reasonable" or "not
reasonable", with no further feedback to Eliezer, myself, or anyone
else. The wager is not based on their opinion but on the actual 2-hour
test, if Eliezer chooses to participate.
4. I fully understand if Eliezer chooses not to go for it, not only
because of his work, but because he has already proven his point twice
before and he has little reason to believe that this time will be any
different. It's entirely his choice. However, I don't wish to
participate with a substitute, because winning that contest would prove
nothing, unless Eliezer agrees in advance to participate if I can beat
the substitute.
5. If Eliezer still declines to participate after reading this offer,
then all bets are off, and I'll go ahead and post my proof, and we can
all discuss it openly anyway. But since Eliezer keeps his escape
techniques secret, the rest of us will never know for sure whether he
could have escaped the cage or not.

Mike Williams

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT