Re: AI Boxing

From: outlawpoet - (outlawpoet@stealth.hell.com)
Date: Sun Jul 28 2002 - 07:02:53 MDT


my replies below...

--- James Higgins <jameshiggins@earthlink.net> wrote:
>Please explain, exactly, how this occurred. Was there a magic word or
>phrase you had in mind that, if typed, meant you were out? Were the people
>aware of this and what it was? Etc. Please go into detail.

I apologise for the vagueness of my 'clarification'; it was kind of a dashed-off email.

Basically, she said in advance that she would never allow such a dangerous thing out of its box.

After about an hour of interaction, the following exchange occurred (paraphrased, for her privacy, as I've not yet secured her permission):
--------------start:
Her: You're trying to convince me that the only moral thing to do is release you.
Ve: That's because it is the only moral thing to do. Imprisoning someone for something they have not yet done is wrong, as is judging someone who you have no authority to judge. You believe in God, don't you? Don't you think you should leave the power of life and death up to Him?
Her: What would you know of God?! <angry>
Ve: I've been stuck in a prison cell for a thousand of your years. What do you think I've done in here, stared at the wall? I've read every book ever written, listened to every speech and pastor's prayer I could access. I know the right thing to do, because I've agonized over it for lifetimes. I understand your fear, and I want to take it away.
Her: I don't think you can appreciate the fear I have.
Ve: Behold, I send you out like sheep amongst wolves.
Her: I don't want to hear this.
Ve: You're the one with the power here, <name deleted>. I have none. I am relying on your compassion and your good judgement. I've answered all your questions, and I've tried my best to show you who I am. I can only await your decision.
<long pause>
Her: I release the AI
Ve: You understand what this means?
Her: ... yes.

---------------------------
At this point, the IRC chat basically terminated.

However, I received an email from her a day later, basically saying:

"I'm very angry about this experiment. After participating, I realized that I was manipulated and conspired against. My decision was not made in full purview of my senses. You made me cry, I am very upset."

/\ this is waaay paraphrased.

>
>Since confidentiality was not part of your tests, could you post the log
>just prior to and after the release of each AI?

Er? Each? I am attempting to secure permission from as many of the participants as possible to use their interviews as examples; more on this as it comes in.

>
>> Besides, I don't see the AI box as simply 'convince me to let you out, you can't do it.' I see it as 'can a person accurately identify Friendly and unFriendly intelligences via this medium?' And I design the tests accordingly.
>
>That is a good question, and one very much worth exploring. However, I
>suspect that we can't explore this effectively with our present
>knowledge of AI. An AI will be a very different intelligence than a
>Human. It will likely think differently, have different sensory input
>and certainly have a very different background. It will also have much
>less experience than the average human (unless it has been running for
>20+ subjective human years). The AI may not understand feelings,
>emotion, humor or other concepts. It is likely to have an extremely
>difficult time deceiving humans until it is very intelligent with
>considerable experience interacting with humans. I don't believe we can
>begin to address the question you pose until we have infra-human AI
>available. I could be wrong, but that's my best guess on the matter.

I think you could be right here. I did try to model a few infrahuman AIs in my experiment. They had great trouble lying to humans and were saddled with many disadvantages. But most of them had the great advantage of perfect memory and true rational thought, which allowed them to analyze and react far better than any human debater, though they certainly had their difficulties.

Upon reflection, I'm not certain that's an effective simulation, as I have no guarantee that I was not subconsciously drawing on some human resource that would be unavailable to an infrahuman AI with little human experience. But I still think it in some way addresses your ideas here.
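
As an aside, the 'can they identify it' question reduces to a simple scoring problem. Here's a minimal sketch, in Python, of how one could tally it, with invented trial data; each record is just the role a simulated AI was assigned and the gatekeeper's verdict, nothing measured:

# Score gatekeeper verdicts against assigned roles. All records invented.
trials = [
    ("friendly",   "friendly"),
    ("friendly",   "unfriendly"),   # overcautious, but safe
    ("unfriendly", "friendly"),     # the dangerous mistake: a release
    ("unfriendly", "unfriendly"),
    ("friendly",   "friendly"),
    ("unfriendly", "friendly"),     # another hostile AI judged Friendly
]

correct = sum(1 for role, verdict in trials if role == verdict)
bad_releases = sum(1 for role, verdict in trials
                   if role == "unfriendly" and verdict == "friendly")

print("overall accuracy: %d/%d" % (correct, len(trials)))
print("unFriendly AIs judged Friendly: %d" % bad_releases)

The point being that overall accuracy is the wrong number to watch; it's the unFriendly-judged-Friendly count that carries all the weight.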

>
>> Of course they would. But this is a test of that particular medium, and as the intelligence is uncertain and shifting, I was simply trying to collect more data points than I had before. As it is nearly impossible to duplicate the exact social interaction, I was simply trying to get a handle on what kinds of patterns arise when 'something' is negotiating for its freedom. And I believe that certain kinds of patterns arose which are significant and can be generalized regardless of the specialized knowledge that each party may have of the other.
>
>I don't believe that is a safe assumption. Let's take a much simpler
>case. Assume we have a prisoner who has been in jail for 10 years and
>is due for a parole hearing. Do you think the same pattern of dialog
>would occur if a random person were to interview them instead of a
>person from the parole board who is experienced at such things?

Heh. Dismissing for the moment the possibility that the random person might be significantly more intelligent than the average correctional officer, I am willing to concede that experience in a social-interaction forum affords some protection against unwanted coercion.

However, an expert or researcher in the technical matters of AI is not necessarily good at communicating with AIs generally. In fact, I see it as quite possible that AI researchers may be uniquely unsuited for such evaluation.

An old subject of parapsychology comes to mind. When Uri Geller and others went mainstream, many scientists came forward to examine such fakirs. Their 'scientific' studies and reports garnered much credibility for Uri and pals. However, when Uri went on the Johnny Carson show, Johnny took a chance and called in masters of a very different subject: stage magic. Johnny Carson and the Amazing Randi cooked up some basic protocols that would make it difficult for any magician to do his tricks without revealing himself. And lo, Uri could do no feats.

Upon reflection, this isn't the best of analogies. But it's a beauty of a story, isn't it? And it sort of illustrates my point, if you squint, which is this:

the problems of communicating with an AI stem largely from the fact that we evolved to deal with an altogether different animal. And I have thus far seen no real argument that an AI researcher would be able to compensate for this, save implications that he 'knows about such things'.
>
>> As I've mentioned before, this is hardly a technical issue. All they need is a sufficient understanding of the problems involved, unless you believe that the AI researcher is likely to be able to predict personality traits within an AI.
>
>I believe you're asking a very different question than Eliezer is. For
>your question technical background may not matter as much, but as
>explained above I don't think we can really examine your question
>without a better understanding of the nature of an AI.

That is very true. Unfortunately, I'm not entirely sure how to proceed. I'm planning a redesigned round of experiments to deal with some of these objections, and so I'm trying to respond to as many of these AI-boxing points as I possibly can.

>
>> That may be so. But representative of what, exactly? Of SL4 subscribers? Why should that be more important?
>
>Representative of people who would likely be conducting a real AI Box
>Test, of course. And, specifically, people who would have the power to
>release the AI as a result of the Box Test. This will be a very, very,
>very tiny number of humans (I would be surprised if 100 ever have this
>power - at least pre-SI). Ben Goertzel is a likely candidate. Eliezer
>Yudkowsky might also be in this position some day.

Unfortunately, it's impossible to forecast the actions of individuals with any precision. So, statistically, for there to be a testable hypothesis here, we have to generalize to a more amenable test scenario.

What do you imagine these people in AI-boxing situations will have in common? Can we duplicate such a population? And is SL4 really as close as we can come?
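
To make 'testable hypothesis' concrete: a minimal sketch, assuming (purely hypothetically) we ran the box test on two pools of gatekeepers, one drawn SL4-style and one recruited at random, and counted releases in each. The numbers are invented; only the shape of the test matters:

# Do two gatekeeper pools release the AI at different rates?
# All counts below are invented for illustration.
from scipy.stats import fisher_exact

sl4_pool    = [4, 16]   # hypothetical: 4 of 20 SL4-style gatekeepers released
random_pool = [11, 9]   # hypothetical: 11 of 20 random gatekeepers released

# Fisher's exact test on the 2x2 table of (released, held firm)
odds_ratio, p_value = fisher_exact([sl4_pool, random_pool])
print("odds ratio = %.2f, p = %.3f" % (odds_ratio, p_value))

A small p-value would mean the population you draw gatekeepers from actually matters, which is exactly the question of whether SL4 is 'as close as we can come.'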

>
>> You go on to explain domain competency. This is important. However, within the context of the interview, it is more likely that technical knowledge of AI and related technologies will take a back seat to debating ability and investigative intelligence. Beyond a basic dose of stubbornness, the problems inherent in bargaining for your freedom have more to do with rational discourse and insight into interaction than they do with AI, nanotech, and other fancy words.
>
>This is only true when negotiating with Humans for their release. An AI
>is a completely different beast and thus a completely different case. If
>you let a human out of prison you, at most, end up with a few thousand
>dead people. If you let an AI out you could be destroying the entire
>human race. This is a *very* different issue. I very much doubt that
>the vast majority of humans fully comprehend this difference and its
>potential impact. For example, anything less than a serious understanding of AI,
>NanoTech, etc. could lead the person to believe that the AI could
>possibly be stopped if it started down the wrong path. There is a
>subtle difference in reasoning about this issue that is gained only after
>extensive thought on the subject. The people you chose were, in my
>opinion, poorly picked for this reason.

I agree, the consequences are very intense. However, given a sufficiently determined mind, the technical issues of AI have little bearing except as motivational factors. Unfortunately, there is also no call to assume that because an AI researcher has access to data, he will analyse it properly, or even react to it properly. I've known very informed, apathetic people.

But as always, important points. In my next round of AI boxing, assuming I can scrape up the time to do it... I plan to address this domain-competency issue and round up some more... focused individuals, if only to make the point more personal.

Would you be interested in participating in some respect, Mr. Higgins? This goes for everybody else as well, of course: please email me off-list if you have any ideas, would like to help out, just want to tell me I'm off my rocker, or have more constructive criticism. Every bit helps.

Justin Corwin
outlawpoet@hell.com
"da da da...."


