From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 13 2001 - 09:21:33 MDT
Jimmy Wales wrote:
>
> > Well, one interesting test might be this. Over an IRC or IM channel, I
> > will play the part of an AI locked in a box. If you, at the end of the
> > session, decide to "let me out of the box" despite your previous decision
> > not to do so, then this demonstrates partial support for my statement.
>
> Yes, but that's a pretty easy test for me to pass, isn't it?
>
> I mean, yes, if you could do that I'd be impressed. But if you couldn't,
> I think that would hardly count much _against_ your position. You're a
> clever guy, but hardly an SI with abilities to reprogram me at a fundamental
> level.
Failure would count very, very slightly against my position - it can't help
but do that, according to the BPT (the Bayesian Probability Theorem).
Success, on the other hand, would represent interesting suggestive
evidence, although not actual confirmation. But it would at least be a
good anecdote.
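(A quick sketch of that Bayesian asymmetry, in Python; the prior and
likelihoods below are made-up illustrative numbers, not anything from this
exchange:)

    # Bayes' theorem update for H: "a transhuman AI could talk its way out."
    # All numbers here are placeholder assumptions for illustration only.
    prior = 0.5                   # P(H) before the experiment
    p_success_given_h = 0.20      # a mere human persuader sometimes succeeds if H is true
    p_success_given_not_h = 0.02  # success is rare if even an SI couldn't do it

    def posterior(success: bool) -> float:
        """P(H | outcome) via Bayes' theorem."""
        like_h = p_success_given_h if success else 1 - p_success_given_h
        like_not_h = p_success_given_not_h if success else 1 - p_success_given_not_h
        evidence = like_h * prior + like_not_h * (1 - prior)
        return like_h * prior / evidence

    print(f"P(H | success) = {posterior(True):.3f}")   # ~0.909: a large jump above the prior
    print(f"P(H | failure) = {posterior(False):.3f}")  # ~0.449: only slightly below the prior

Under these assumed numbers, success moves the probability a lot while
failure moves it only a little, which is why failure counts "very, very
slightly" against the claim.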
Again, right now I'm busy because of Extro 5, but perhaps sometime
afterwards we can run the experiment. I'll voluntarily give you a
handicap by offering to PayPal you $10 if you still haven't "decided to
let me out" at the end of the session.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence