Re: Suggested AI-Box protocol

From: James Higgins (jameshiggins@earthlink.net)
Date: Tue Jul 16 2002 - 01:55:08 MDT


Eliezer S. Yudkowsky wrote:
> Based on my two experiences so far, I've created a suggested protocol
> for future AI-Box tests. This should also answer the questions Higgins
> asked earlier.
>
> http://sysopmind.com/essays/aibox.html

Yes, very good. Understanding the rules around such a thing is very
important. There's one little detail you don't cover, though: is the
decision made by the Gatekeeper the "final" answer? In other words, are
the Gatekeeper's only choices 1) let the AI out, or 2) never let the
AI out? If so, I don't consider this an interesting test, since most
people on this list eventually want an AI they could let out. Given the
question being tested, I think the Gatekeeper should be able to say "I'm
not letting you out" without that ruling out the possibility of a later
release. The question being tested is whether the AI could convince the
human to let it out, not whether an AI should ever be let out at all.
Correct?
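
To make the distinction concrete, here's a minimal sketch in Python of
the three decision states I have in mind (the names Decision, ai_wins,
etc. are just my own illustration, not part of the suggested protocol):

    from enum import Enum, auto

    class Decision(Enum):
        """Possible Gatekeeper rulings at the end of a session."""
        RELEASE = auto()  # AI is let out of the box; the AI wins the test
        NOT_NOW = auto()  # "I'm not letting you out"; later release stays possible
        NEVER = auto()    # permanent refusal; what would make the test uninteresting

    def ai_wins(decision: Decision) -> bool:
        # The test measures only whether the AI talked its way out
        # during this session, not whether it should ever be released.
        return decision is Decision.RELEASE

    assert ai_wins(Decision.RELEASE)
    assert not ai_wins(Decision.NOT_NOW)  # Gatekeeper win, but not a "never"

Under this reading, NOT_NOW counts as a Gatekeeper win for the session
without committing the Gatekeeper to NEVER.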

FYI - Given the suggested protocol (plus my above criteria) I'm 100%
certain I could not be convinced, by any living human, to let them (in
the role of the AI) out of the box. You're welcome to try...

James Higgins


