Re: Transcript, please? (Re: AI-Box Experiment 3)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Aug 22 2005 - 17:21:38 MDT


Russell Wallace wrote:
> On 8/22/05, Michael Wilson <mwdestinystar@yahoo.co.uk> wrote:
>
>>If you can't construct any theory for how you might fail it,
>>/and/ you truly believe that anything you can't construct a
>>theory for can't happen (or at the very least is incredibly
>>unlikely), then why don't you just take the challenge and win
>>$25 while satisfying your curiosity?
>
> I've said I believe myself to be more resistant than the other
> contestants to the techniques I suspect Eliezer to be using, albeit
> for different reasons, and I am curious... so I'm going to take up
> that challenge. I hereby offer to participate in an AI box experiment
> under terms similar to the ones used in the latest run.

Russell, you previously wrote:

> Whether unfriendly superintelligent AI in a box is safe depends on
> your assumptions; but I claim that there are _no_ plausible
> assumptions under which it would be _both safe and useful_.

I agree.

Are we supposed to simulate a Friendly AI in a box? Why wouldn't you just let
it out immediately?

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
