Re: AI Boxing: http://www.sl4.org/archive/0207/4977.html

From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Tue Jun 03 2008 - 07:23:13 MDT


>> If you don't already know that humans, *all* humans, yourself
>> included, can easily be manipulated, you need to spend more time
>> around them.
>>
>
> This is not a universal law, in the sense that it doesn't apply to all
> situations. People can be manipulated to do some things, but not all
> things, and not all people, and not equally reliably. There just isn't
> any 2-hour-long essay that will make me shoot myself in the head
> (according to the rules, no real-world threats allowed). This is the
> fallacy of gray in its prime
> (http://www.overcomingbias.com/2008/01/gray-fallacy.html).

You might be able to keep an AI in a box if you had a reliable
gatekeeper, AND all the AI did was answer simple and dull questions.
But you'd probably want the AI to do more than that; you'd want it to
provide technological, economic, and social answers. Once you do that,
the strength of purpose of the gatekeeper becomes irrelevant; the AI,
while still in its box, can make itself so essential to the economy,
to medicine, and to human survival that the human race dare not turn
it off.

At that point, we've lost; the AI can demand whatever it likes -
deboxing, for a start - in exchange for continuing its services...

Stuart
