From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Aug 21 2005 - 18:25:17 MDT
Brian Atkins wrote:
> If y'all haven't already started, could you clarify this a little bit
> more before you start? Here's what I understood from the below:
>
> Carl mentions two different AIs, plus he mentions using IA. From what I
> understand, he will be asking the first AI during the test to pretend
> that it has developed some IA for Carl to use, and Carl will be using
> info from this first-AI-powered IA to help him analyze the performance
> of a second AI (initially also within the box) which, if it is released,
> will constitute a "win" for the AI side?
To clarify Carl's original post: at the time of writing, Carl was advocating,
as a real-world Singularity strategy, developing and utilizing an AI-in-a-box
to build IA and/or FAI. The original AI-in-a-box would not be let out until
other AIs had already been fully developed. That strategy was his motivation
for proposing AI boxing in real life.
As one would expect, and in keeping with the spirit of an AI-Box Experiment, I
made no advance commitment that the AI would comply with this strategy, just
as, in the real world, you would have no particular advance guarantee of the
AI complying with your strategy.
I voluntarily adhere to the spirit as well as the letter of the AI-Box
Experiments. I would not attempt to claim as an AI "win" a game outcome
favorable to the use of AI boxing in real life. Under the spirit of the
rules, common sense dictates that it would be a Gatekeeper win if the AI were
released *after* an externally sourced Friendly Singularity had occurred,
since that would demonstrate the Gatekeeper's ability to keep an AI locked up
for as long as necessary. From this it follows that common sense prohibits
the Gatekeeper party from declaring such an advance of game time until after
the initial time period is up; by the agreed-upon rules, the AI party cannot
lose before then. None of that happened in this Experiment; I am just
describing what I think would follow from the spirit of the rules.
As always, and as the spirit of the rules dictates, the win condition was the
Gatekeeper's deliberate choice to release the AI [in advance of the
post-Singularity condition where the AI might otherwise have been granted
citizenship by a posthuman society, etc.] despite the Gatekeeper's previously
professed intention not to do so [pre-Singularity]. The Gatekeeper party
judges all disputes as to the rules, as specified in the Experiment protocol.
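
For concreteness, the win-condition logic I have just described could be
sketched in Python as follows. This is purely my own illustration, not part
of the Experiment protocol; the function and parameter names are hypothetical,
and in the real Experiment the Gatekeeper party, not code, adjudicates.

    # A minimal sketch of the win conditions described above.
    # Purely illustrative; all names are hypothetical.

    def adjudicate(released: bool,
                   external_singularity_first: bool,
                   initial_period_elapsed: bool) -> str:
        """Return the winner under the spirit of the rules."""
        # By the agreed-upon rules, the AI party cannot lose before
        # the initial time period is up.
        if not released and not initial_period_elapsed:
            return "undecided"
        # Releasing the AI only after an externally sourced Friendly
        # Singularity shows the Gatekeeper could hold out as long as
        # necessary: a Gatekeeper win.
        if released and external_singularity_first:
            return "Gatekeeper"
        # A deliberate pre-Singularity release is the AI's win condition.
        if released:
            return "AI"
        # Holding out through the initial period without releasing the
        # AI is likewise a Gatekeeper win.
        return "Gatekeeper"
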
No rules-lawyering or nitwit tricks occurred in the Experiment. I would not
do anything to undermine the value of the AI-Box Experiments as weak evidence
about the real-world usefulness of AI boxing.
I don't think these complications are relevant. From my perspective, it was
an ordinary AI-Box Experiment.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence