Re: How to make a slave (was: Building a friendly AI)

From: John K Clark
Date: Tue Nov 20 2007 - 02:05:22 MST

On Mon, 19 Nov 2007, "Thomas McCabe" wrote:

> (sigh)


> The point of FAI theory isn't to figure out what the AGI

I said it before and I’ll say it again: I just don’t understand what
Adjusted Gross Income has to do with what we were discussing.

> do it predictably under recursive
> self-improvement.

Yes, that is the friendly AI idea: you expect to understand and control
an intelligence a million times as smart as you, one on the fast
track to becoming a billion times as smart as you, and soon after that a
trillion times; and you expect to enslave this awesome force of nature
from now until the end of time. And you think this is not only possible but
moral. I expect that if I really put my mind to it I could dream up an even
stupider idea, but it wouldn’t be easy; you’d have to give me some time.

> If we can program an AGI to reliably enhance the jumping
> abilities of lizards, and to continue following this goal
> even when given superintelligence, the most difficult
> part of the problem has already been solved.

I believe the above is in English; I say this because I recognize all
the words, and they seem to be organized according to the laws of English
grammar. But if the sentence has any meaning, it escapes me.

 John K Clark

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:00 MDT