Re: How to make a slave (was: Building a friendly AI)

From: Stathis Papaioannou
Date: Tue Nov 20 2007 - 03:41:22 MST

On 20/11/2007, John K Clark wrote:

> > If we can program an AGI to reliably enhance the jumping
> > abilities of lizards, and to continue following this goal
> > even when given superintelligence, the most difficult
> > part of the problem has already been solved.
> I believe the above is in English, I say this because I recognize all
> the words and they seem to be organized according to the laws of English
> grammar; but if the sentence has any meaning it escapes me.

You seem to be saying that an AI, in addition to its ability to
figure things out, will have certain goals but not others. Where will
these goals come from, and on what basis are the good goals
distinguishable from the bad?

Stathis Papaioannou

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT