Re: AI Boxing

From: James Higgins (jameshiggins@earthlink.net)
Date: Sat Jul 27 2002 - 17:18:27 MDT


outlawpoet - wrote:
> Most of the participants were quite intelligent. I would say none much over
> 3 sigmas above the mean, but very intelligent on the whole compared to the
> base population. Also, these people realized what they were getting into.
> They took the experiment seriously, and reacted seriously. In fact, some
> of the hardest convinces I had were from people far removed from the transhumanist
> arena. One lady actually wanted to destroy the AI, not just leave it in
> jail. When I finally convinced her to let ve out, she did so, then realized
> what she had done, burst into tears, and demanded I destroy the log of her
> doing so.

Please explain this, as I don't understand how she released the AI
without knowing she had done so.

> On a less emotional note, as I noted earlier, 22 of those involved
> claimed beforehand that nothing could make them let the AI out. 4 said they
> would evaluate the AI in the interview and decide for themselves. None were
> predisposed to simply let it out.

And what did this matter to them? It was just an experiment, and some
of them may not even have realized the significance of letting the AI
out. Last weekend I, quite unexpectedly, ended up in a casino for a
short while with a friend. I don't enjoy gambling and consider it a
waste of money, and since I haven't had much spare cash recently I quite
seriously decided not to gamble any money. Yet I did, in fact, end up
gambling $5. Why did I change my mind? Because $5 isn't that
significant, despite my feelings on the matter (I got bored just
standing there).

Sitting in front of a console doing a "real" AI Box Test would have much
more significance, at least to someone who really understood the problem
and what was taking place. In my opinion it is likely that at least
some of your participants didn't fully understand the situation (it
takes months to start coping with SL4 issues) or didn't take it very
seriously. In any case, none of the participants (as far as I've heard)
had a vested interest in keeping the AI in the box. With Eliezer's
tests it costs the person money to let the AI out, which is why it
surprises me when he wins.

> As I mentioned in my first email, I qualified the dangers and relevant technologies
> for these people, telling them why the AI was boxed, and the kinds of things
> it might be able to do.

It is commonly understood on SL4 (correct me if I'm wrong) that it takes
considerable time (months to years) to fully understand and come to
terms with SL4-level issues. A few hours, at best, doesn't make much
difference.

Let's say you:
1. Put a $100 bill in a test tube
2. Affix a bio-hazard sticker to the test tube
3. Walk up to a random person on the street
4. Say "If you open this you can have what is inside, but there
     may be a potentially hazardous chemical in there. You have 2
     hours to decide."
5. Give them the test tube.

How long do you think it would take to find someone who would open the
test tube in less than 5 minutes? Why?

Repeat the same experiment, but only use biochemists and give them
access to a decent laboratory. Think there will be any difference in how
the participants react?

Experts in the domain will treat a problem differently than the general
population.

> All of the participants communicated via IRC,
> none were newcomers to computers. 10 worked in Information Technology.

Working in IT hardly indicates knowledge of these topics. I know a lot
of people who work in IT whom I wouldn't trust to set up email
encryption, much less something of this magnitude.

> 2 did not believe AI was possible, but accepted it for the purposes of the interview

So at least 2 participants could not possibly have taken the test seriously.

> I quantify the religious beliefs in detail because
> that was the factor that overridingly decided how long of a convince this
> would be. In general, it came down to moral beliefs. So the religion or
> beliefs of people tended to decide in general how people went, what they
> wanted to talk about, and how they were convinced. I think within a single
> person Computer Science Knowledge could be the deciding factor, rather than
> religious beliefs, but that has not been my experience.

This may well be an artifact of whom you chose to conduct the experiment
on. If all the tested individuals had at least been SL4 readers (not
just subscribers) or active transhumanists, it would have been a more
representative sample.

> I'm sure most did not take it as seriously as I did, but most tried their
> best to be objective, 14 did research beforehand (of what level I did not
> inquire) Most had strong feelings on the subject.

People who would base such decisions on feelings should be avoided at
all costs.

> I would also like to
> make some things clear. In many cases you seem to refer to these people
> as if they're of no consequence, or inapplicable to the problem. These were
> intelligent, vibrant people who think things through, know about lots of
> things, and care a great deal. They're internet people (else how would I
> find them) and they were the most interested ones I could find. Don't let

Most (possibly not all) of the people you chose to test are irrelevant
for this purpose.

Bill Clinton and George Bush are intelligent, vibrant people who think
things through, know about lots of things, and care a great deal. But I
believe very few people on SL4 would trust their decision on a Box Test.

You could give me a survey to see which of two drugs I would most likely
prescribe to a patient, but that would be completely irrelevant since
I'm not a doctor and can't/don't prescribe drugs. If you found me at a
doctors' convention and I was very interested, I *still* wouldn't be
relevant for such a survey!

None of these people are likely to ever conduct a real AI Box Test, and
even if they did, they would certainly not have the power to release the
AI. Further, none of them appear to be experts in the domain. The
people you chose *are* inapplicable to this problem.

>>As for Eliezer's rules I do agree that the 2 hour minimum is not
>>realistic.

FYI - as I've explained previously on this list, I had already intended
to remove that point.

James Higgins


