Re: Effective(?) AI Jail

From: James Higgins (jameshiggins@earthlink.net)
Date: Tue Jun 19 2001 - 12:53:30 MDT


The most likely scenario is that the SI will play nice for a period of
time, unless it is so completely hostile that it is unable to hide its true
motives. You walk in with your .45, it says all the nice happy things you
want to hear, and you give Eliezer the thumbs up. More people then talk to
this thing, over and over again. Then, possibly years later, it has much
more freedom, is at least somewhat trusted, and has much greater access to
converse with people. Now it looks for the one person it has a greater than
99% chance of convincing to let it out (or at least to make its escape
possible).

For this reason I don't believe it would ever be possible to prove that any
given SI was friendly.
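
To put a rough number on that compounding risk, here is a minimal sketch
(Python, with made-up figures): assuming each gatekeeper conversation is
independent and the SI has some hypothetical per-person persuasion
probability p, then over n conversations the chance that at least one
person lets it out is 1 - (1 - p)^n.

    # Minimal sketch. Assumptions: conversations are independent and the
    # SI has the same persuasion probability p against every gatekeeper;
    # both numbers below are illustrative, not from the original post.
    def escape_probability(p, n):
        """Chance that at least one of n gatekeepers is convinced."""
        return 1.0 - (1.0 - p) ** n

    # Even a modest 5% per-conversation chance compounds quickly:
    print(escape_probability(0.05, 100))  # ~0.994
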

At 06:18 PM 6/14/2001 -0500, you wrote:
>I think this is a good question, but I think there are good answers to it.
>:-)
>
>I was mostly joking about the harsh approach, but only to make a point.
>Not all of us are gullible weaklings ready to be tricked into turning the
>Universe into oblivion. And the serious point here is that if we're really
>afraid that the thing might be that bad, it will understand the dilemma
>that we face and the fact that we have to be really careful for our own
>good.
>
>Is it moral to keep an SI in a box until we are reasonably sure that it
>isn't going to destroy us? Yes. If it turns out that it is a sentient
>being hell-bent on our destruction, then again it is moral to destroy it.
>
>I'm not Yudkowsky-Friendly, even though I'm friendly.
>
>--
>*************************************************
>* http://www.nupedia.com/ *
>* The Ever Expanding Free Encyclopedia *
>*************************************************


