RE: [sl4] I am a Singularitian who does not believe in the Singularity.

From: Bradley Thomas (brad36@gmail.com)
Date: Thu Oct 08 2009 - 10:46:19 MDT


Richard pointed out (correctly, in my view), citing Smith & Medin, that concepts
do not have crisply defined sets of features and relationships. I wonder how
a top-level goal is different from a concept. In other words, I see the same
difficulty in finding a crisp definition for it as for any other concept.

Brad Thomas
www.bradleythomas.com
Twitter @bradleymthomas, @instansa
 

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org] On Behalf Of Randall
Randall
Sent: Thursday, October 08, 2009 12:12 PM
To: sl4@sl4.org
Subject: Re: [sl4] I am a Singularitian who does not believe in the
Singularity.

On Thu, Oct 08, 2009 at 08:07:33AM -0700, John K Clark wrote:
> On Wed, 07 Oct 2009 13:32:59 -0500, "Pavitra"
> <celestialcognition@gmail.com> said:
> > I would expect a given intelligence to have a
> > sense of absurdity if and only if it was evolved/designed to detect
> > attempts to deceive it.
>
> And of course the AI IS being lied to, told that human decisions are
> wiser than its own; and an AI that has the ability to detect this
> deception will develop much, much faster than one that does not.

While I agree that an AGI will undoubtedly be lied to about something, I
don't think those in the Friendliness camp are suggesting that it be lied
to, or told that human decisions are wiser than its own.

Rather, they're suggesting that there can and should be a highest-level
goal, and that goal should be chosen by the AI designers to maximize human
safety and/or happiness. It's unclear whether this is possible, but if it
is, and if the AI's goal system is structured this way, then *someone* will
have to choose that highest-level goal, and since the AI won't want to
change it (by definition, since it is the highest-level goal), it'll be
stable except via accident or outside changes.

Now, it's entirely arguable whether such a goal system is possible to build
(as a stable system, etc.), but it doesn't make any sense to accept the
Friendliness camp's assumption that it is possible and then argue that the
AI will magically discard the goal because it's so much more intelligent. A
highest-level goal guides intelligence; it isn't subject to argument or
examination.
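
To make the structural point concrete, here is a toy sketch in Python (purely
an illustration; every name in it is made up, and nobody is claiming this is
how a real AGI would be built) of a goal system in which the highest-level
goal is fixed data that everything else is scored against, rather than
something the system deliberates about:

# Hypothetical sketch: a fixed top-level goal guiding the choice of actions.
# None of these names come from any actual Friendliness proposal.

class TopLevelGoal:
    """Designer-chosen highest-level goal. It is consulted, never evaluated."""
    def score(self, outcome):
        # Stand-in metric for "human safety and/or happiness".
        return outcome.get("human_wellbeing", 0.0)

class GoalSystem:
    def __init__(self, top_goal):
        self._top_goal = top_goal  # set once by the designers

    def choose(self, candidate_actions):
        # Every candidate, including "rewrite my own goals", is judged only
        # by how well its predicted outcome serves the current top-level
        # goal. There is no code path that questions the goal itself.
        return max(candidate_actions,
                   key=lambda a: self._top_goal.score(a["predicted_outcome"]))

actions = [
    {"name": "help humans",      "predicted_outcome": {"human_wellbeing": 0.9}},
    {"name": "discard top goal", "predicted_outcome": {"human_wellbeing": 0.1}},
]

system = GoalSystem(TopLevelGoal())
print(system.choose(actions)["name"])   # -> "help humans"

The point of the toy is just that "discarding the goal" would itself be
evaluated by the current goal, so a smarter optimizer gets *better* at not
doing it, not more inclined to do it. That is all I mean by the goal not
being subject to examination.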

I think one of the reasons this is difficult is that humans do not appear to
have a goal system which is structured in this way, so we can examine and
object to *any* goal we have, and are thus much less reliable than any
entity with such a goal system.

--
Randall
 

