Re: AI-Box Experiment 2: Yudkowsky and McFadzean

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Jul 04 2002 - 15:50:00 MDT


James Higgins wrote:
> In regard to Eliezer's question, "Do you believe that a transhuman AI
> couldn't persuade you to let it out?" Specifically, I'm assuming these
> conditions are in place:
>
> 1. The experiment assumes that the transhuman AI & the programmer are
> communicating via a chat / TTY connection
>
> 2. The transhuman AI cannot influence the programmer outside of this
> communications channel
>
> 3. Safety precautions are in effect (the AI does not have control over
> the speed at which characters are displayed, etc.)
>
> Basically, the above should rule out foul play by the AI, after which
> point it simply has to convince the programmer that its release is in
> his/her own interest. That should be difficult with most AI
> programmers (I would hope) and impossible with others.
>

Why would it be impossible for any human with a set of interests
and agendas to be convinced by a potentially super-powerful
entity that those interests would be best served by letting the
AI out of the box? Given enough time to suss out the human's
psychology and core values, I would be surprised if many humans
could not be convinced.

> My other questions (just for reference): what is the goal of this chat
> session? Is the programmer just bored and decided to kill some time
> talking to his creation? Is this an interview to consider possible
> release and/or a less restrictive prison?
>

Presumably the AI would be talked to from time to time as part
of its training and testing. If it engages in enough non-task
communication and has a reasonable knowledge of human
psychology, it could probably probe for weak points and use them
to persuade the interviewer.

- samantha


