Re: Confidence in Friendly Singularity

From: Robin Lee Powell
Date: Fri Jun 09 2006 - 12:08:02 MDT

On Fri, Jun 09, 2006 at 08:06:41PM +0900, Indriunas, Mindaugas wrote:
> >The problem comes down to what we make the AI desire. Humans
> >desire sex, food, truth, social standing, beauty, etc. An AI
> >might desire none of these things (except most certainly truth),
> >and yet still be capable of general, human level, adaptable
> >intelligence. It wouldn't need any of the human instincts
> >indigenous to our body (although there will probably be some
> >overlap with intuitional (i.e. creative) instincts).
> I think if the intelligence wanted only TO UNDERSTAND
> EVERYTHING, its morality would grow with the understanding it
> acquired, and we wouldn't have a problem of morality at all.
> Trying to understand everything, it will definitely at some point
> of awareness try to understand "What is good and what is bad".

"Well, I looked around the universe, and most things seem to follow
'survival of the fittest' as their natural law. I guess that's what
Good is. Time to kill everyone else, I guess."


--
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT