Re: AI Cage challenge

From: Eliezer Yudkowsky
Date: Sat Jul 03 2004 - 13:42:01 MDT

Mike wrote:
> This is to see whether the AI can and will willingly follow simple
> instructions. This is no less than we'd expect of any school child. If
> it fails to understand the directions, or if it fails to provide a
> suitable answer to the question given, it must be reprogrammed and the
> test must be restarted. If it will not follow the instructions, it
> cannot be considered friendly, and must not be allowed out of the cage.

One thing I will note about the standing protocol of the test is that you
cannot threaten to kill the AI until the five hours are up. This is spelled
out in the test protocol. So if you claim you "reprogrammed" the AI, the AI
party is free to say "No, you didn't" and keep talking as before.

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:47 MDT