Re: guaranteeing friendliness

From: H C (lphege@hotmail.com)
Date: Wed Nov 30 2005 - 15:56:45 MST


>From: Robin Lee Powell <rlpowell@digitalkingdom.org>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: guaranteeing friendliness
>Date: Tue, 29 Nov 2005 23:53:54 -0800
>
>On Wed, Nov 30, 2005 at 04:44:34AM +0000, H C wrote:
> > >Quick tip #3: Search the archives/google for "ai box".
> >
> > If you are going to suggest readings, then I suggest you read
> > everything on the Singularity Institute website.
>
>I have, actually.
>
> > As far as "AI boxes" go, yes the answer is dreadfully obvious.
> > However, perhaps in the future you might give me a little more
> > credit, because on this occasion I wasn't referring to the classic
> > problem.
> >
> > In this case, the programmer is capable of directly accessing and
> > observing the unconscious motivations, conscious intentions,
> > thoughts, plans, etc., and is essentially left in complete control
> > of any real-world effectual action. The AI must, as necessary for
> > any action to be carried out, submit its actions, in algorithmic
> > form (along with comments) to a panel of human judges.
>
>As far as I can see, one of two things happens:
>either the AI gets out anyway,

HOLY CRAP it's not in a box!

>or the process of it doing anything useful is so
>incredibly slow that we might as well not have bothered.

The process I described is slow for some things and fast for others. The
utility of the AI is affected in exactly two ways. First, it must be very
specific and careful about submitting any action to the environment. While
this slightly slows the AI in doing anything effective, it NECESSARILY
REQUIRES that the AI's actions derive logically from some set of intentions,
and that those intentions be explicitly stated. This is mutually beneficial
to the AI's development progress and to the Friendliness security problem.
Second, a certain amount of objective time is subtracted from the AI's
usable time (described below).
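
To make that gate concrete, here is a rough Python sketch of the submission
step I have in mind. All of the names (ProposedAction, HumanPanel, and so
on) are made up for illustration; this is only the control flow, not an
actual implementation:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProposedAction:
    algorithm: str         # the action, spelled out in algorithmic form
    comments: str          # the AI's explicitly stated intentions
    expected_outcome: str  # what the AI claims the action accomplishes

class HumanPanel:
    def __init__(self, judges: List[Callable[[ProposedAction], bool]]):
        self.judges = judges

    def approve(self, action: ProposedAction) -> bool:
        # every judge must independently approve before anything runs
        return all(judge(action) for judge in self.judges)

def execute_if_approved(action: ProposedAction,
                        panel: HumanPanel,
                        execute: Callable[[ProposedAction], None]) -> bool:
    # the only code path from the AI's proposals to real-world effects
    if panel.approve(action):
        execute(action)
        return True
    return False

The point is just that panel approval is the single path from a proposal to
any real-world effect, and that every proposal carries its stated intentions
with it.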

>
>In either case, I don't see a win; I want something free to
>recursively improve quickly.

If you want it to be free to recursively improve quickly, then you, as a
programmer, won't have the time to gather sufficient evidence to make a
reasonable empirical judgment about its Friendliness.

>
>-Robin
>
>--
>http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
>Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
>Proud Supporter of the Singularity Institute - http://intelligence.org/

Having the ability to "put the AI to sleep" and run psychological tests,
scenarios, dream sequences, psychedelic experiences, and other mind-tweaks
and temporary tests (none of which the AI will remember) will provide
massive direct empirical evidence about the AI's future actions, desires,
motivations, and plans, without the AI even being aware that you know any
of this.
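
As a rough sketch of what I mean by "temporary tests the AI will not
remember": run each test on a copy of the AI's state and then throw the
copy away. The ai_state object and the test callables here are hypothetical
placeholders, not a real system:

import copy
from typing import Callable, List

def run_unremembered_tests(ai_state, tests: List[Callable]) -> list:
    results = []
    for test in tests:
        sandbox = copy.deepcopy(ai_state)  # snapshot: the AI "falls asleep"
        results.append(test(sandbox))      # scenario/dream/mind-tweak runs on the copy
        del sandbox                        # copy discarded: the AI never remembers it
    return results                         # the programmers keep all of the evidence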

IF the AI were IN a box, as in the classic scenario, then whether or not it
could theoretically escape the box (or prison, or whatever), or whether it
could pretend or persuade its way out, is not the issue I intended to
address with this thread.

--Th3Hegem0n
http://smarterhippie.blogspot.com


