Re: AI Boxing: http://www.sl4.org/archive/0207/4977.html

From: Vladimir Nesov (robotact@gmail.com)
Date: Sat May 31 2008 - 03:27:54 MDT


On Sat, May 31, 2008 at 4:39 AM, justin corwin <outlawpoet@gmail.com> wrote:
> Interesting, but I got the first inquiry 9 days ago. A curious trend.
>

It was referenced by Eliezer on Overcoming Bias several days ago;
that must be it.

I feel that the conclusion of that old discussion, which I didn't have
a chance to participate in, is rather misguided. However obvious it
may be, if an AI locked in the box is sane enough to understand a
complex request like "create a simple theory of Friendliness and hand
it over", it can be used for this purpose. This AI is not intended to
be released at all, at least not before the Friendly one built
according to that design, if the design proves reasonable, assumes the
position of SysOp or the like. Even if building an AI that can
actually understand what you mean by requesting a Friendliness theory
is 99.999% of the way there, the actual step of using a boxed setup to
create a reliable system may still be needed.

Forcing one's way past an irrational gatekeeper, whether influenced by
religious ideas or failing to understand the setup from a
non-anthropomorphic perspective, doesn't seem very impressive. I can't
extract any meaning from those old experiments if I don't know what
happened -- I'm confident enough that it can't happen to me, and I
simply assume folly on the part of the gatekeeper, not unknown
unknowns as advertised. It's almost as in Newcomb's paradox: you are
able to make a decision in advance, and changing it later is always an
error, because you lose even if your ritual of reasoning shouts
otherwise.

-- 
Vladimir Nesov
robotact@gmail.com

