From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jul 06 2002 - 05:23:20 MDT
Or to put it another way, James Higgins: Have you ever considered the
problem from the AI's perspective? Or are you just considering it from
yours? Have you sat down and really thought about how *you* would
handle the problem of persuading someone to let you out? How much time
have you spent thinking about it? Do you think you could win an AI-Box
experiment, or would you at least be willing to try? If you wanted
someone to guard an AI, would you choose someone who said "I can't
imagine how any AI could convince me to let it out", or would you choose
someone who had previously won, playing the AI's role in an AI-Box
experiment? Not that I think it would help much either way against a
transhuman AI, but I'm asking what you would do.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence