Re: How to make a slave (was: Building a friendly AI)

From: Emlyn (emlynoregan@gmail.com)
Date: Mon Dec 03 2007 - 22:57:20 MST


On 20/11/2007, Robert Picone <rpicone@gmail.com> wrote:
> On Nov 19, 2007 3:00 PM, Thomas McCabe <pphysics141@gmail.com> wrote:
>
> > (sigh) The point of FAI theory isn't to figure out what the AGI
> > *should* do. It's to get the AGI to do anything at all besides random
> > destruction, and to do it predictably under recursive
> > self-improvement. If we can program an AGI to reliably enhance the
> > jumping abilities of lizards, and to continue following this goal even
> > when given superintelligence, the most difficult part of the problem
> > will already have been solved.
> >
> > - Tom
> >
>
> And the idea of a superintelligence that is unable to change its own goals
> when appropriate isn't at all disquieting to you?...
>
> To use your example, wouldn't the resulting species of genetically modified,
> technology-using monitor lizards get a bit annoying when they start jumping
> over fences and through windows to reach whatever tasty prey they can find?
>
> Do you believe that there is any goal, or set of goals, out there that,
> when obsessed over exclusively by a smart enough being, never has the
> potential to do harm?
>

Several hours after the intelligence-augmented lizards wipe out the
last of humanity, they use their newfound god-given abilities to jump
into hyperspace...

-- 
Emlyn
http://emlynoregan.com
