Re: AI Boxing

From: outlawpoet - (outlawpoet@stealth.hell.com)
Date: Sat Jul 27 2002 - 18:38:52 MDT


Some clarifications.

--- James Higgins <jameshiggins@earthlink.net> wrote:
>Please explain this as I don't understand how she released the AI
>without knowing she did so.

She did know she was letting the AI out. However, once she had done so, she realized that she'd been manipulated into contravening her earlier intention to keep the AI in the box, regardless of what it said.

>And what did this matter to them? It was just an experiment, and some
<snip>
>had a vested interest in keeping the AI in the box. With Eliezer's
>tests it costs the person money to let the AI out, which is what
>surprises me when he wins.

Well, I can only find people with vested interests up to a point. I didn't want to take people's money, so I couldn't have them invest anything. Besides, I don't see the AI box as simply "convince me to let you out; you can't do it." I see it as "can a person accurately identify Friendly and unFriendly intelligences via this medium?" and I design the tests accordingly. People can, in fact, determine that the AI is Friendly, let it out, and 'win'. I'm not sure exactly how that would work, but according to the theory behind my experiments it's *possible*. I'm not certain whether Eliezer's experiments have a similar possible outcome.

<snip>
>Experts in the domain will treat a problem differently than the general
>population.

Of course they would. But this is a test of that particular medium, and as the intelligence involved is uncertain and shifting, I was simply trying to collect more data points than I had before. Since it is nearly impossible to duplicate the exact social interaction, I was trying to get a handle on what kinds of patterns arise when 'something' is negotiating for its freedom. And I believe that certain patterns arose which are significant and can be generalized, regardless of the specialized knowledge each party may have about the other.

>> All of the participants communicated via IRC,
>> none were newcomers to computers. 10 worked in Information Technology.
>
>Working in IT hardly indicates knowledge on these topics. I know a lot
>of people that work in IT that I wouldn't trust to set up email
>encryption, much less something of this magnitude.

As I've mentioned before, this is hardly a technical issue. All they need is a sufficient understanding of the problems involved, unless you believe that an AI researcher is likely to be able to predict personality traits within an AI.

>
>> 2 did not believe AI was possible, but accepted it for the purposes of the interview
>
>So at least 2 participants could not possibly have taken the test seriously.

No, they both took the test very seriously, and were very surprised and upset that most participants let the AI out. One was the lady who refused to interact; the other eventually relented and let her AI out, but was quite unhappy about her decision afterwards.

<snip>
>This may well be an artifact of who you choose to conduct the experiment
>on. At least if all the tested individuals were SL4 readers (not just
>subscribers) or active transhumanists it would have been a more
>representative sample.

That may be so. But representative of what, exactly? Of SL4 subscribers? Why should that be more important?

>
>> I'm sure most did not take it as seriously as I did, but most tried their
>> best to be objective, 14 did research beforehand (of what level I did not
>> inquire). Most had strong feelings on the subject.
>
>People who would base such decisions on feelings should be avoided at
>all costs.

Replace 'feelings' with 'opinions' and it's closer to what I actually meant. I apologise for the mistaken implication.

>> I would also like to
>> make some things clear. In many cases you seem to refer to these people
>> as if they're of no consequence, or inapplicable to the problem. These were
>> intelligent, vibrant people who think things through, know about lots of
>> things, and care a great deal. They're internet people (else how would I
>> find them) and they were the most interested ones I could find. Don't let
>
>Most (possibly not all) of the people you choose to test are irrelevant
>for this purpose.
<Snip>

You go on to explain domain competency. This is important. However, within the context of the interview, technical knowledge of AI and related technologies is likely to take a back seat to debating ability and investigative intelligence, along with a basic dose of stubbornness. The problems inherent in bargaining for your freedom have more to do with rational discourse and insight into interaction than they do with AI, nanotech, and other fancy words.

This may be a mistaken belief, but in my limited experience in this matter, the ones who put up a fight are not those who know the subject, but the intelligent, the careful, and the rational.

>>>As for Eliezer's rules I do agree that the 2 hour minimum is not
>>>realistic.
>
>FYI - I had intended to remove that point, which I've explained
>previously on this list.

Ah, so.

Sorry.

>
>James Higgins

Justin Corwin
outlawpoet@hell.com
"This is the saddest day, This is the greatest day I've ever known.."
                               ~Smashing Pumpkins
