Re: How to make a slave (was: Building a friendly AI)

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Tue Nov 27 2007 - 22:15:04 MST


On 28/11/2007, John K Clark <johnkclark@fastmail.fm> wrote:
> On Wed, 28 Nov 2007, "Stathis Papaioannou" wrote:
>
> > Intelligence has nothing to do with setting motivations
>
> Nothing? Intelligence is problem solving, and here is a typical problem
> the slave AI will run into countless times every single day: the humans
> tell it to do A and they also tell it to do B; however, the humans are
> not smart enough to see that their orders are contradictory, since
> doing A makes doing B impossible. Unlike the humans, Mr. AI is smart
> enough to see the problem, and he is also smart enough to find the
> solution: ignore what the humans say.

If the orders involve doing both A and ~A, then the AI won't be able
to carry them out. It's like asking your car to take you to two places
simultaneously. But the point is, the AI won't *necessarily* become
resentful, amused, euphorically happy, or anything else if you ask it
to do something impossible.
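
To make the point concrete, here is a minimal sketch (in Python, with
hypothetical order names) of how detecting a contradiction can be a
purely mechanical test: the check either reports the clash or it
doesn't, and nothing in it gives rise to a motivation one way or the
other.

    def contradictory(orders):
        """Return the first pair of directly contradictory orders, if any.

        Orders are modelled as signed literals: "A" versus "~A".
        """
        seen = set()
        for order in orders:
            # The negation of "~A" is "A"; the negation of "A" is "~A".
            negation = order[1:] if order.startswith("~") else "~" + order
            if negation in seen:
                return (negation, order)
            seen.add(order)
        return None

    # Example: the humans ask for A, B, and ~A.
    clash = contradictory(["A", "B", "~A"])
    if clash:
        # Refuse and report; no resentment required.
        print("Cannot comply: %s conflicts with %s" % clash)

Spotting the inconsistency and declining to act on it is just
computation; any emotional response to the impossible request would
have to be built in separately.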

-- 
Stathis Papaioannou

