From: Vladimir Nesov (email@example.com)
Date: Tue Jun 03 2008 - 08:59:30 MDT
On Tue, Jun 3, 2008 at 5:23 PM, Stuart Armstrong
>>> If you don't already know that humans, *all* humans, yourself
>>> included, can easily be manipulated, you need to spend more time
>>> around them.
>> This is not a universal law, in the sense that it doesn't apply to all
>> situations. People can be manipulated to do some things, but not all
>> things, and not all people, and not equally reliably. There just isn't
>> any 2-hour-long essay that will make me shoot myself in the head
>> (according to the rules, no real-world threats allowed). This is the
>> fallacy of gray in its prime
>> (http://www.overcomingbias.com/2008/01/gray-fallacy.html ).
> You might be able to keep an AI in a box if you got a reliable
> gatekeeper, AND all the AI did was answer simple and dull questions.
> But you'd probably want the AI to do more than that; you'd want it to
> provide technological, economic, social answers. Once you do that, the
> strength of purpose of the gatekeeper becomes irrelevant; the AI,
> while still in its box, can make itself so essential to the economy,
> to medicine and to human survival, that the human race dare not turn
> it off.
> At that point, we've lost; the AI can demand whatever it feels like -
> deboxing, for a start - in exchange for continuing its services...
Then it would be a rather silly error. If you become dependent on the
AI, you have just given it the power, which as you point out is
equivalent to letting it out. Since this is so easy to see, include
giving the AI power in the list of things the gatekeeper shouldn't do
(not that any particular list of errors will do much good).
This problem doesn't apply to Friendly AI, which is designed to be
safe to let out. And in particular, you don't become dependent on an
Oracle AI merely by collaborating with it on the development of
Friendly AI.
-- Vladimir Nesov firstname.lastname@example.org
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT