Re: [sl4] I am a Singularitian who does not believe in the Singularity.

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Wed Oct 07 2009 - 10:13:22 MDT


On Wed, Oct 07, 2009 at 01:26:15PM +0000, Randall Randall wrote:
> On Wed, Oct 07, 2009 at 08:41:01AM +0100, Stuart Armstrong wrote:
> > >> If you saw a random baby lying on the sidewalk, you would not
> > >> kill it. This is a "limitation" in the human architecture.
> > >> Do you find yourself fighting against this built-in
> > >> limitation? Do you find yourself thinking, "You know, my
> > >> life would be so much better if I wanted to kill babies."
> > >
> > > If you substituted the word "slug" for "baby" you would have a
> > > much more realistic analogy;
> >
> > Um - no you wouldn't. You'd get a massively less realistic
> > analogy; slugs are things we hate and value not at all. The
> > process analogised is going from valuing something very highly
> > to valuing something much less; loving babies but voluntarily
> > deciding to treat babies as slugs.
>
> Of course, John is talking about the intelligence difference,
> which he sees as overriding all that "goals" business.

Yes. The position is so blatantly insane, and he seems so unwilling
to absorb anything anyone says on the topic, that I wasn't really
talking to him. I just wanted to make sure it didn't go unchallenged,
since there seem to be newbies around.

> Some people do love and highly value their houseplants, which
> might be an analogy you can both agree on.

Very, very few people would run into a burning house to save their
houseplants; that's just not a strong enough emotional attachment to
be a decent analogy. I guess that's sort of the boundary for me:
take something you care enough about that you would run into a
burning house to save it; do you feel "restrained" by the fact that
you can't want to kill that thing for fun? Do you wish to fix that
"limitation"?

The entire idea is preposterous. Believing that such a thing would
occur shows an utter lack of understanding of the entire concept of
goals and/or utility functions. I'd say it shows an utter lack of
understanding of the entire concept of *intelligence*, but no-one
understands intelligence well enough to make a claim like that, I
think.
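
To make that concrete, here's a toy sketch (my own illustration, with
made-up names, not anything John or anyone else in the thread has
proposed): if an agent scores a proposed change to its own goals using
the utility function it has *now*, a change that makes it want to kill
the thing it currently loves simply scores lower. There is nothing
there to feel "restrained" by.

    # Toy sketch, hypothetical names throughout: an agent evaluating a
    # proposed goal-change with its *current* utility function.

    def current_utility(world):
        # The agent values the thing it loves staying alive.
        return 10.0 if world["loved_thing_alive"] else 0.0

    # Predicted outcomes of keeping vs. adopting the "kill it for fun" goal.
    world_if_goals_kept = {"loved_thing_alive": True}
    world_if_goals_changed = {"loved_thing_alive": False}

    # Both futures are scored by the utility function the agent has now.
    score_keep = current_utility(world_if_goals_kept)       # 10.0
    score_change = current_utility(world_if_goals_changed)  # 0.0

    assert score_keep > score_change  # the change is simply not wanted

A cartoon, obviously, but that's the sense in which "wanting to want
to kill babies" isn't a limitation the agent is straining against.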

-Robin

-- 
They say:  "The first AIs will be built by the military as weapons."
And I'm  thinking:  "Does it even occur to you to try for something
other  than  the default  outcome?"  See http://shrunklink.com/cdiz
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/

