Re: How to make a slave (was: Building a friendly AI)

From: Robert Picone (rpicone@gmail.com)
Date: Tue Nov 20 2007 - 04:39:37 MST


On Nov 19, 2007 3:00 PM, Thomas McCabe <pphysics141@gmail.com> wrote:

> (sigh) The point of FAI theory isn't to figure out what the AGI
> *should* do. It's to get the AGI to do anything at all besides random
> destruction, and to do it predictably under recursive
> self-improvement. If we can program an AGI to reliably enhance the
> jumping abilities of lizards, and to continue following this goal even
> when given superintelligence, the most difficult part of the problem
> has already been solved.
>
> - Tom
>

And doesn't the idea of a superintelligence that is unable to change its own goals
when appropriate disquiet you at all?

To use your example, wouldn't the resulting species of genetically modified,
technology-using monitors get a bit annoying when they start jumping over
fences and through windows to reach whatever tasty prey they can find?

Do you believe there is any goal or set of goals that, when obsessed over
exclusively by a sufficiently intelligent being, never has the potential to
do harm?

