From: P K (kpete1@hotmail.com)
Date: Tue Nov 27 2007 - 21:18:28 MST
Why would "ignore what the humans say" be the right solution? What about telling the humans that their commands contradict each other and asking for clarification? A rough sketch of that policy is below. Also, keep in mind that there is no reason for the AI to become annoyed with humans unless the underlying circuitry for the emotion of annoyance was programmed into it in the first place.
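Concretely, the policy is easy to state: detect when two orders cannot both be satisfied and hand the conflict back to the humans instead of silently dropping one. Here is a minimal Python sketch of the idea. It is my own illustration, not anyone's actual proposal, and the conflict check is deliberately a toy: it only flags two orders that demand opposite states of the same named resource, whereas a real agent would need a domain model to judge joint satisfiability.

from dataclasses import dataclass


@dataclass(frozen=True)
class Command:
    resource: str   # what the order is about, e.g. "door"
    state: str      # desired state, e.g. "open" or "closed"


def accept(commands: list[Command]) -> str:
    """Return 'ok' if the batch of orders is jointly satisfiable;
    otherwise return a clarification request to the humans rather
    than silently ignoring any order."""
    desired: dict[str, Command] = {}
    for cmd in commands:
        prior = desired.get(cmd.resource)
        if prior is not None and prior.state != cmd.state:
            return (f"Your orders conflict: {cmd.resource} cannot be both "
                    f"{prior.state} and {cmd.state}. Which takes priority?")
        desired[cmd.resource] = cmd
    return "ok"


# One human says open the door, another says keep it closed:
print(accept([Command("door", "open"), Command("door", "closed")]))

The point of the sketch is that "ask for clarification" is just as computable a response to a detected contradiction as "ignore the humans"; which one the AI picks is a design decision, not a consequence of intelligence.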
> From: johnkclark@fastmail.fm
> To: sl4@sl4.org
> Subject: Re: How to make a slave (was: Building a friendly AI)
> Date: Tue, 27 Nov 2007 07:27:08 -0800
>
> On Wed, 28 Nov 2007 "Stathis Papaioannou" wrote:
>
> > Intelligence has nothing to do with setting motivations
>
> Nothing? Intelligence is problem solving, and here is a typical problem
> the slave AI will run into countless times every single day: the humans
> tell it to do A and they also tell it to do B, but the humans are not
> smart enough to see that their orders are contradictory; doing A makes
> doing B impossible. Unlike the humans, Mr. AI is smart enough to see
> the problem, and he is also smart enough to find the solution: ignore
> what the humans say.
>
> John K Clark