Re: AI Boxing

From: outlawpoet - (outlawpoet@stealth.hell.com)
Date: Sat Jul 27 2002 - 14:30:04 MDT


Thanks for the reply, James. I reply to a couple of the points below.

--- James Higgins <jameshiggins@earthlink.net> wrote:
>Justin Corwin wrote:
>> On a related note, I believe this experiment can be generalized to most
>> humans, and should be seen as applicable even to highly intelligent and
>> prepared individuals, as some of these people were, and I think this
>> illustrates some universal principles.

I would like to clarify my belief here. I did not mean to imply that the
pattern of conversation, or the method of convincing, could be generalized
to most humans. The problem with humans as voluntary jailers is that most
humans have flaws or 'hooks' in their thinking, even thinking that they've
mulled over and formalized. An AI (or in this case, me roleplaying an AI)
can seek out and grab those hooks in whatever kind of conversation you have
with it, and exploit them. The most convincing hooks are different for each
person: logical flaws in their religion, or philosophy, or even their
perspective on AI. It is this process that I believe can be generalized
from this series of experiments. Humans are too bad at thinking to accurately
defend against a determined and rational opponent, particularly an unFriendly,
lying, transhuman one.

Another point that I think got lost here: the original aim was not to show
how cool I am, or how many convinces I can bag, but that the AI Box theory
is not sound. Humans cannot determine accurately how Friendly or unFriendly
an AI is. Humans can, as was demonstrated in my experiment, leave an AI in
jail just by not listening, or not interacting, but that defeats the whole
stated purpose of the AI Box, which is to serve as a testing ground for
deciding whether the AI should be released.

>
>I agree that this experiment could be generalized to most humans, but I
>do not believe it is applicable to highly intelligent & prepared
>individuals. You said "only 2 were transhumanists" and "1 could be
>considered a nascent Singularitarian". Therefor your sample may be
>indicitive of the general population but not of transhumanists,
>singularitians or AI researchers (the most important).

Most of the participants were quite intelligent. I would say none was much over
3 sigmas above the mean, but on the whole they were very intelligent compared to
the base population. Also, these people realized what they were getting into.
They took the experiment seriously, and reacted seriously. In fact, some
of the hardest convinces I had were from people far removed from the transhumanist
arena. One lady actually wanted to destroy the AI, not just leave it in
jail. When I finally convinced her to let ve out, she did so, then realized
what she had done, burst into tears, and demanded I destroy the log of her
doing so.

On a less emotional note, as I noted earlier, 22 of those involved
claimed beforehand that nothing could make them let the AI out. 4 said they
would evaluate the AI in the interview and decide for themselves. None were
predisposed to simply let it out.

>
>The vast majority of the population is not equiped for such a challenge.
> To make an intelligent decision in this one would, at least, need to
>understand the concepts of NanoTechnology & The Singularity. If they
>don't truly understand, or believe in, the potential threat they may
>arbitrarily release the AI. Even if NanoTech & The Singularity were
>impossible, a transhuman AI could still wreak havoc on the Internet.
>You don't even qualify the level of Computer Science knowledge of most
>participants (prefering to specify their religous beliefs in detail,
>which (IMHO) is far less important).

As I mentioned in my first email, I qualified the dangers and relevant technologies
for these people, telling them why the AI was boxed, and the kinds of things
it might be able to do.

All of the participants communicated via IRC;
none were newcomers to computers. 10 worked in Information Technology. 2
did not believe AI was possible, but accepted it for the purposes of the
interview. All had some time to prepare for the interview. (In an interesting
twist, one was doing research on AI, came across Yudkowsky's site, and noticed
the AI-boxing protocol.)

I quantified the religious beliefs in detail because
that was the factor that overridingly decided how long a convince this
would be. In general, it came down to moral beliefs. So people's religion or
beliefs tended to decide, in general, how they went, what they
wanted to talk about, and how they were convinced. I think that within a single
person computer science knowledge could be the deciding factor, rather than
religious beliefs, but that has not been my experience.

>
>You also don't specify the conviction of the participants. Were they
>just playing along because they thought it was fun or had some excess
>time? Did they take this seriously?

I'm sure most did not take it as seriously as I did, but most tried their
best to be objective; 14 did research beforehand (of what level I did not
inquire). Most had strong feelings on the subject.

I would also like to
make some things clear. In many cases you seem to refer to these people
as if they're of no consequence, or inapplicable to the problem. These were
intelligent, vibrant people who think things through, know about lots of
things, and care a great deal. They're internet people (else how would I
find them?) and they were the most interested ones I could find. Don't let
my quantifying the differences fool you: they were, on the whole, more alike
than unalike, and the picture is one of above-average intelligence, interest
in the issues at hand, and computer savvy, if not professional expertise. But as
the saying goes, sheep have no trouble telling each other apart.
 
And when I say only 2 were transhumanists, I mean REAL transhumanists,
with the desire to transcend themselves, upwing beliefs, rational objectors.
Hell, most of the people I interviewed would be more than comfortable on
the Extropians list, or Transhumantech. They're just not yet ready to go
all the way. (Or at least as far as I have; I hesitate to call that all
the way, even if Eliezer doesn't see anything past SL4.)

>
>As for Eliezer's rules I do agree that the 2 hour minimum is not
>realistic.

I saw Eliezer's rule setting the minimum at 2 hours as a hack to try to approximate
transhuman arguing ability, but I felt the experiment would be more meaningful
without it. I suppose it might have changed my mind if more than one person
had just 'hung up' on me.
>
>James Higgins

Interesting stuff, James. Thanks for the reply, again.

Justin Corwin
outlawpoet@hell.com

"One more time....Celebrate...One more time."
                              ~Daft Punk


