From: James Higgins (jameshiggins@earthlink.net)
Date: Sun Jul 28 2002 - 07:55:51 MDT
outlawpoet - wrote:
> After about an hour of interaction, the following exchange occurred (paraphrased, for her privacy, as I've not yet secured her permission):
> --------------start:
> <long pause>
> Her: I release the AI
> Ve: You understand what this means?
> Her: ... yes.
> ---------------------------
> at this point, the IRC chat basically terminated.
Ok, I just wanted to verify that there wasn't any magic word/phrase
thing you were using. If she actually said that then, well, that's
that. In my personal experience women tend to make decisions based on
feelings/emotions much more than men. I wonder if that is part of the
biology.
I find it unlikely that an AI researcher would have made that choice in
the same situation, though. They most likely would have spent years
getting to that point and would have pondered the issues much more
thoroughly prior to that discussion. But, obviously, I generalize.
> I think you could be right here. I did try and model a few infrahuman AIs in my experiment. They did have great trouble lying to humans and were saddled with many disadvantages. But most of them had the great advantage of perfect memory, and true rational thought. This allowed them to analyze and react far better than any human debater, though they certainly had their difficulties.
>
> Upon reflection, I'm not certain that's an effective simulation, as I have little guarantee that I was not subconsciously referring to some human resource that would be unavailable to an infrahuman AI with little human experience. But I still think it in some way addresses your ideas here.
Right, I don't think a human can think like an infra-human any more than
one can think like a trans-human. We just don't have the right
hardware. If you tried to simulate the actions of a chimpanzee you might
get close with lots of study, but I'm quite certain you wouldn't be
"thinking" the same things in the same way as the chimp. We don't
really converse with chimps and we don't worry about them lying to us.
Plus we actually have a large quantity of them to study. We don't
have any AIs to study (yet).
> Heh, dismissing for the moment the possibility that the random person might be significantly more intelligent than the average correctional officer, I am willing to concede that experience in a social interaction forum affords some protection against unwilling coercion.
Intelligence without sufficient information / experience is not all that
helpful. For example, let's say the parole officer has a lot of
statistical evidence memorized (76% of all criminals with X commit
crimes when paroled, 35% of criminals with X history commit murder
after being released, etc.) and the person off the street (almost
certainly) does not know this information. They would be unlikely to
make the decision based on facts and more likely to make it based on
feelings, which is exactly what we want to avoid!
> However, an expert or researcher into the technical matters of AI is not necessarily a good communicator of AIs generally. In fact, I see it as quite possible that AI researchers may be uniquely unsuited for such evaluation.
Some, yes, some no. I recently spoke to an AI researcher who I believe
would have done fine. He may not have been a "great" communicator but
he was definitely at least average. I believe it will take an AI a long
time and a lot of practice to even get to average. It may, in fact,
require the AI to be trans-human before it attains even average skill,
because it lacks much of how we learn (it can't see facial expressions,
body language, tone of voice, etc. to determine the effect it is having).
> The problems of communicating with an AI stem largely from the fact that we are evolved to deal with an altogether different animal. And I have, thus far, seen no real arguments that an AI researcher would be able to compensate for this, save implications that he 'knows about such things'.
Correct, we are evolved to communicate with humans; in much the same
way, the AI will *not* be ideally suited to communicate with humans. The AI
researchers who created the AI will understand this, have a good
understanding of how the system thinks, what its limitations are, etc.,
at least in the initial stages. This also gives them an advantage over
other humans for dealing with that AI.
I still maintain that the average person does not understand the
seriousness of letting an AI out, whereas an AI researcher damn well
better! This is not something that can be explained, communicated or
learned in a few days, either.
> This is a very true statement. Unfortunately, as to how to proceed, I'm not entirely sure. I'm planning on redesigning another round of experiments to deal with some of these objections, and so, I'm trying to respond to as many of these AI boxing points as I possibly can.
Well, like I said above, I don't think you can accurately simulate any
AI, especially its thoughts, reasons and subtle communication points.
No matter how hard you try and prepare, you're still a human running on
human-level hardware. I don't think anything even slightly significant
will be found in this area until we have at least one running AI to study.
> Unfortunately, it's impossible to forecast with any precision the actions of individuals. So statistically, in order for there to be a testable hypothesis here, we have to generalize to a more amenable test scenario.
>
> What do you imagine these people in AI-boxing situations will share in common? Can we duplicate such a population? And is SL4 really as close as we can come?
People on SL4 have at least a reasonable degree of the backstory down
(if they've been reading and paying attention for a while, at least).
But many of them probably don't have sufficient Computer Science
(specifically AI) knowledge. The population you are looking for is:
1. Computer Science expert with a reasonable background in Artificial
Intelligence
2. Strong interest in working on AI projects
3. Has spent significant time (at least 6 months, let's say) thinking
about serious future issues (nanotech, Singularity, whatever)
The exact nature and beliefs related to #3 are not so important. But I
believe that any *successful* AI team will have spent significant time
at least thinking about such issues at the periphery during the
preceding years (leading up to success).
I know there are some individuals on SL4 who fit this mold exactly, but
most probably don't. However, I think most people on SL4 would be a
better choice than random internet people, if those were the only options.
> I agree, the consequences are very intense. However, given a sufficiently determined mind, the technical issues of AI have little bearing except as motivational factors. Unfortunately, there is also no call to assume that, because an AI researcher has access to data, he will analyse it properly, or even react to it properly. I've known very informed apathetic people.
Very true, but I find it unlikely that an "apathetic" individual would
be a lead member of a successful AI team. A professor sitting on the
sidelines commenting about AI designs and such, yes. A core
participant active in designing and building one of the first AIs, no.
IMHO.
> Would you be interested in participating in some respect, Mr. Higgins? This of course goes for everybody else as well; please email me off-list if you have any ideas, or would like to help out, or just to tell me I'm off my rocker, or to give more constructive criticism. Every bit helps.
I might be willing to participate, depending on the details. Available
time is, of course, a factor.
James Higgins