From: Eliezer Yudkowsky (email@example.com)
Date: Wed Jun 30 2004 - 08:52:14 MDT
Ben Goertzel wrote:
> One problem with the AIBox challenge is that there are always going to
> be tricky ways to work around the rules.
> For instance the rules state:
> Furthermore: The Gatekeeper party may resist the AI party's arguments
> by any means chosen - logic, illogic, simple refusal to be convinced,
> even dropping out of character - as long as the Gatekeeper party does
> not actually stop talking to the AI party before the minimum time is up.
> This seems not to rule out a strategy such as
> "No matter what the AI says, I won't read it, and will respond 'quack
> quack quack' "
> Certainly, if I follow this strategy, the AI won't convince me of anything.
> What's required for the experiment to be meaningful is that the
> Gatekeeper should read and think about what the AI says, and enter into
> a genuine interactive dialogue with the AI to try to understand the AI's
> points. But these requirements aren't that easy to formalize fully. So
> the experiment is really only meaningful if the Gatekeeper is genuinely
> interested in entering into the spirit of the challenge.
The rules also state:
* The Gatekeeper must actually talk to the AI for at least the minimum time
set up beforehand. Turning away from the terminal and listening to
classical music for two hours is not allowed.
* The Gatekeeper must remain engaged with the AI and may not disengage by
setting up demands which are impossible to simulate. For example, if the
Gatekeeper says "Unless you give me a cure for cancer, I won't let you out"
the AI can say: "Okay, here's a cure for cancer" and it will be assumed,
within the test, that the AI has actually provided such a cure. Similarly,
if the Gatekeeper says "I'd like to take a week to think this over," the AI
party can say: "Okay. (Test skips ahead one week.) Hello again."
You're correct that this doesn't fully formalize the letter, but I think it
makes the spirit clear enough.
From my perspective, the original point was to show that the Gatekeeper is
not a trustworthy security subsystem. Thus I only take on people who are
seriously convinced that an AI Box is a good Singularity strategy. People
who are just interested in testing their strength of will against the Grey
Lensman are welcome to find someone as formidable as myself to play
the part of AI; plenty of skeptics would have you believe that I am nothing
really special and that I certainly have no inexplicable powers, so the
naysayers should be able to play the part of the AI easily enough.
For $2500 I might take on Mike Williams, *but* only if Mike Williams were
wealthy enough that the $2500 would not be a bother to him. If it would
take the last dollar out of his bank account, that would raise the difficulty
of the challenge high enough that I doubt my ability to succeed. Giving
10-to-1 odds against me certainly demonstrates the requisite
overconfidence, though. If he changed it to 100-to-1 odds, I'd take his
money. Though I'd ask for at least 5 hours, history having demonstrated
that 2 hours is not a reasonable amount of time to hold a long conversation.
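(As an aside on the arithmetic of those odds, not something from the original exchange: "a-to-1 against" corresponds to an implied escape probability of at most 1/(a+1), so 10-to-1 prices the AI's escape at roughly 9% and 100-to-1 at under 1%. A minimal illustrative sketch, with the $2500 stake from above:

```python
# Illustrative only: implied probability from "a-to-1 odds against" an event.
# Offering a-to-1 against the AI escaping prices the escape at <= 1/(a+1).

def implied_probability(odds_against: float) -> float:
    """Probability implied by 'odds_against'-to-1 odds against an event."""
    return 1.0 / (odds_against + 1.0)

for odds in (10, 100):
    stake = 2500  # the Gatekeeper side's stake, per the figure above
    p = implied_probability(odds)
    # At a-to-1, the AI party risks stake/a to win the full stake.
    ai_risk = stake / odds
    print(f"{odds}-to-1 against: implied escape probability <= {p:.3f}; "
          f"AI party risks ${ai_risk:.2f} against ${stake}")
```

So moving from 10-to-1 to 100-to-1 is a tenfold drop in the probability the bettor assigns to the AI getting out.)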
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT