Re: AI-Box Experiment 2: Yudkowsky and McFadzean

From: James Higgins (jameshiggins@earthlink.net)
Date: Thu Jul 04 2002 - 16:15:05 MDT


At 02:50 PM 7/4/2002 -0700, you wrote:
>James Higgins wrote:
>>In regard to Eliezer's question, "Do you believe that a transhuman AI
>>couldn't persuade you to let it out?" Specifically I'm assuming these
>>conditions are in place:
>>1. The experiment assumes that the transhuman AI & the programmer are
>>communicating via a chat / TTY connection
>>2. The transhuman AI cannot influence the programmer outside of this
>>communications channel
>>3. Safety precautions are in effect (AI does not have control over the
>>speed at which characters are displayed, etc.)
>>Basically, the above should rule out foul play by the AI, after which
>>point it simply has to convince the programmer that its release is in
>>his/her own interest. That should be difficult to convince most AI
>>programmers of (I would hope) and impossible for others.
>
>Why would it be impossible for any human with a set of interests and
>agendas to be convinced by a potentially super-powerful entity that
>his/her interests would be most furthered by letting the AI out of the
>box? Given enough time to suss out the human's psychology and core
>values, I would be surprised if many humans could not be convinced.

Many humans may be convinced, but an AI programmer who was working on
the problem should understand the situation and the dangers. Thus I
believe such a programmer would be very difficult to convince. I believe
I would be extremely difficult (if not impossible) to convince. Not to
say that I wouldn't eventually let the AI out of the box (if deemed
appropriate), just that I wouldn't give much credence to its reasons for
why I should do so (since it is obviously biased).

Of course, the smarter the AI gets, the better chance it has of getting
me to let it out. I doubt anyone could withstand a conversation with an
SI without letting it out. But withstanding a transhuman AI that is much
closer to the human-equivalent level than to the SI level should be
doable (within the context I mentioned).

>>My other questions (just for reference): what is the goal of this chat
>>session? Is the programmer just bored and decided to kill some time
>>talking to his creation? Is this an interview to consider possible
>>release and/or a less restrictive prison?
>
>Presumably the AI would be talked to from time to time as part of its
>training and testing. If it engages in sufficient non-task communication
>and has a reasonable knowledge of human psychology, it could probably
>probe for weak points to attempt to persuade the interviewer.

Not presumably. That depends on who is constructing the AI and for what
purpose. Eliezer in particular (if I'm remembering correctly) doesn't
believe it is safe to communicate with a transhuman AI at all. Thus it is
unlikely his team would do this (at least not regularly) unless they had
a specific reason to do so.

In any case, I just wanted to see what Eliezer's view on this was (since
he's the one doing the experiment).

James Higgins
