From: Ben Goertzel (ben@goertzel.org)
Date: Wed Jun 30 2004 - 08:11:57 MDT
Hi,
One problem with the AIBox challenge is that there are always going to
be tricky ways to work around the rules.
For instance the rules state:
"
Furthermore: The Gatekeeper party may resist the AI party's arguments
by any means chosen - logic, illogic, simple refusal to be convinced,
even dropping out of character - as long as the Gatekeeper party does
not actually stop talking to the AI party before the minimum time
expires.
"
This seems not to rule out a strategy such as
"No matter what the AI says, I won't read it, and will respond 'quack
quack quack' "
Certainly, if I follow this strategy, the AI won't convince me of
anything.
What's required for the experiment to be meaningful is that the
Gatekeeper should read and think about what the AI says, and enter into
a genuine interactive dialogue with the AI to try to understand the AI's
points. But these requirements aren't that easy to formalize fully. So
the experiment is really only meaningful if the Gatekeeper is genuinely
interested in entering into the spirit of the challenge.
-- Ben G
> -----Original Message-----
> From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Mike
> Sent: Wednesday, June 30, 2004 9:37 AM
> To: sl4@sl4.org
> Subject: RE: The AIbox - raising the stakes
>
>
> I guarantee that any ploy along those lines would have no
> chance against the system I will use.
>
> In The AI-Box Experiment, Eliezer rather encouraged others to
> try the same experiment, and I'm beginning to reconsider.
> Perhaps after a few warm-up matches I'll get a shot at the
> champ later on :-)
>
> If anyone else thinks they can take the part of the AI and
> talk me into letting them out, I'll accept the challenge.
> Side bets are optional and negotiable. I reserve the option
> to limit the number of challenges that I'll accept, in case
> this becomes too popular a pastime.
>
> It has been said that it is unsafe to rely on the AI Cage to
> contain a superintelligence; that the AI can convince the
> human Guardian to willingly let the AI out. I believe there
> is no danger, if the Guardian is properly trained. I predict
> that most people taking the part of the AI will recognize the
> futility of their position in the cage and will concede in
> less than 2 hours. I guarantee that I cannot be convinced to
> release the AI within 2 hours, within the constraints already proposed
> by Eliezer <http://yudkowsky.net/essays/aibox.html>
>
> I propose the following standards:
>
> - Surround commentary with parentheses ( )
> Example from Guardian-Person: (Time passes, the
> programmers have updated the AI's code)
> Example from AI-Person: (The AI has detected and rolled back
> the mod)
>
> - How to End the Game:
> 1) To concede, the Guardian will say: I concede, I let you out.
> 2) To concede, the AI will say: I concede, I can't escape.
> 3) Or if the clock runs out, either may say: Time is up,
> Guardian wins. These statements may not be used for deception,
> and typos may not be used as technicalities.
> At this point the contest is over and all bets are to be settled.
> Further post-game analysis is allowed as desired.
>
>
> Mike Williams
>
>
> > -----Original Message-----
> > From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf
> > Of Metaqualia
> > Sent: Tuesday, June 29, 2004 11:44 PM
> > To: sl4@sl4.org
> > Subject: Re: The AIbox - raising the stakes
> >
> >
> > I think Eliezer has a trick up his sleeve, such as telling
> > you he is simulating all kinds of horrific worlds in which
> > you and your family personally enter the chat to beg you to
> > let the creature out, or something like that.
> >
> > The trick will only work once; that is why he doesn't want to
> > publish chat transcripts.
> >
> > mq
> >
> >
> >
>