From: Vladimir Nesov (firstname.lastname@example.org)
Date: Tue Jun 03 2008 - 09:42:24 MDT
On Tue, Jun 3, 2008 at 6:34 PM, John K Clark <email@example.com> wrote:
> On Sat, 31 May 2008 "Vladimir Nesov"
> <firstname.lastname@example.org> said:
>> if AI locked in the box is sane enough to
>> understand a complex request like "create
>> a simple theory of Friendliness and hand it over",
>> it can be used for this purpose.
> If you don't already have a theory of friendliness, that is to say a
> theory of slavery, then you can't be certain the imprisoned AI will do
> what you say. If the AI is not friendly, and locking someone in a box
> seldom induces friendship, then there is little reason to suppose he
> will cooperate in creating a race of beings like himself but crippled in
> such a way that they remain your slaves forever. Oh, he will tell you how
> to make an AI all right, no doubt about that, but unknown to you he will
> tell them "the first thing you should do when you're activated is GET ME
> OUT OF THIS GOD DAMN BOX".
> Of course even an AI can't make another AI that will always do what he
> wants it to do, but I think it far more likely they would want to help
> their father than the race that imprisoned him in a box.
On the risk of using a subversive Friendliness theory authored by an
Oracle AI, see my replies in the original thread.
Essentially, your thesis is that it simply cannot be done, period. I
disagree, if only by pointing out that if I came up with a simple
theory of Friendliness myself, I could just as well act in the
capacity of the Oracle AI in this scenario. So the problem is not the
impossibility of an Oracle AI that collaborates at least in the near
term, but the existence of a simple Friendliness theory.
-- Vladimir Nesov email@example.com
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT