Re: AI boxing

From: outlawpoet - (outlawpoet@stealth.hell.com)
Date: Sat Jul 27 2002 - 14:29:05 MDT


--- Dani Eder <danielravennest@yahoo.com> wrote:
>Given the propensity for people to let the AI out
>of it's box, for safety's sake the seed AI hardware
>design will need a requirement of "no way for a
>human to let the AI out".
>For example, the computer
>hardware contained within a sealed box which when
>opened will release acid to destroy the hardware.
>Output devices (like monitor screens) behind glass
>windows. The room they are in full of nerve gas,
>so any attempt to hack the output devices is very
>difficult, etc. We discussed this at length
>previously, but it seems
>like we need to include
>the safety provisions in a seed AI installation.

Well, the problem as I see it is not that we always let the AI out of the
box, but that we can't accurately judge whether it is friendly or not.

Because the precautions have to include some way to let the AI out if it's
nice; otherwise, why are we making the AI? Just so we can wave at it through
soundproof, auto-opaquing glass? Also, the big problem is not just people
letting it out, but the AI acting on the world in an unFriendly way, so
it would have to be a prison for the AI as well, and the difficulty of imprisoning
a transhuman intelligence, even one of known origin and substrate, is hard
to assess at best and impossible to assess at worst.

The challenge is not to design
a perfect prison or vault, but to create an AI that we can trust to some
reasonable degree, whether through character or design. Now character is
shifty, and while I'd love to trust ver on that, I'm afraid I probably couldn't
take the chance. So design it is. And at this point, all I've heard are
Asimov Law-type injunctions, Friendly goal systems, and Goertzel's Novamente
FriendlyNode thingey. Of those, Friendliness (a la Yudkowsky) and perhaps
Goertzel's system are the only types I think would survive, although I'd
have to see more of the documentation on both in their specific instances,
of course.

Most other schemes depend on determining the character of the
AI, which is so insanely dangerous and ill-omened that I hardly have words for
it. (Well, of course I have words for it, I just told you some; don't let
my propensity for hyperbole scare you off, it's just a really, really bad
idea.) An AI will not be a person the way a human is. Its personality and goals
are likely to follow rules unintuitive and alien to us. Our social reflexes
are entirely inappropriate, and very misleading, as the AI boxing experiment
illustrates. Absolutely none of the participants let the AI out for the
right reasons, and I believe that is because the right reasons can't
be determined by interaction at that stage and in that medium.

Justin
Corwin
outlawpoet@hell.com
"No, I don't think so. A person can be sincere and still be stupid"



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT