Re: [sl4] I am a Singularitian who does not believe in the Singularity

From: Robin Lee Powell
Date: Wed Sep 30 2009 - 14:31:35 MDT

On Wed, Sep 30, 2009 at 12:03:40PM +0200, Giulio Prisco (2nd email) wrote:
> Some consider the coming intelligence explosion as an existential
> risk. Superhuman intelligences may have goals inconsistent with
> human survival and prosperity. AI researcher Hugo de Garis
> suggests AIs may simply eliminate the human race, and humans would
> be powerless to stop them. Eliezer Yudkowsky and the Singularity
> Institute for Artificial Intelligence propose that research be
> undertaken to produce friendly artificial intelligence (FAI) in
> order to address the dangers. I must admit to a certain skepticism
> toward FAI: if super intelligences are really super intelligent
> (that is, much more intelligent than us), they will be easily able
> to circumvent any limitations we may try to impose on them.

That's why imposing limitations is a losing approach, as has been
written about extensively. The point is to build AIs that want to be
nice to humans in exactly the same way that humans tend to want to
be nice to babies.

If you saw a random baby lying on the sidewalk, you would not kill
it. That is a "limitation" in the human architecture. Do you find
yourself fighting against this built-in limitation? Do you find
yourself thinking, "You know, my life would be so much better if I
wanted to kill babies. I really want to want to kill babies"? No,
of course you don't. You don't experience it as a limitation; it's
just part of who you are. The goal of SIAI is to build AIs that
feel the same way about humans.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT