RE: guaranteeing friendliness

From: H C
Date: Fri Dec 02 2005 - 12:51:50 MST

>From: "Herb Martin" <HerbM@LearnQuick.Com>
>To: <>
>Subject: RE: guaranteeing friendliness
>Date: Wed, 30 Nov 2005 21:02:35 -0800
> > "By the way, I believe that we will create friendly AI, but
> > we will also (eventually) create unfriendly AI, either by
> > accident or by design. "
> >
> >
> > I'm just curious where you came up with that.
> >
> >
> > By my understanding, these two are mutually exclusive... in the
> > sense that if you actually create an FAI, it will quickly ensure
> > that nobody creates a UFAI (a major existential risk), and if you
> > create a UFAI first, that tends to imply humans are, well, screwed.
> >
>Well, even assuming you are correct and that the first friendly
>AI of sufficient intelligence eliminates the possibility of
>an unfriendly AI (very doubtful, but at least useful as a thought
>experiment), then notice that SOMEONE would have created that
>friendly AI, and someone else somewhere will be bound to disagree.
>Do you expect everyone would agree that the US Military (or NSA,
>or whatever) has the final say on "friendly", or that some
>commercial organization (e.g., IBM, Microsoft) has the final
>say? Just look at the people who HATE Microsoft merely for
>selling a lot of software that they don't wish to use.
>Or imagine a terrorist or terrorist supporting nation has the
>"first" friendly (in their terms) AI....
>Or what if it's open source? How do you stop some human from taking
>it offline and tinkering with it until it is able to do battle
>again with the Supreme Friendly AI running the world?
>People cannot even agree on what would constitute 'friendly'
>or 'unfriendly' behavior -- if it is powerful enough to prevent
>any competitors, it will by definition have to PROTECT itself
>and, almost by definition, have to evolve to counter new threats,
>at which point you cannot count on ANY pre-programmed friendly
>behavior being permanent.

Stop here.

You are making a very critical and very subtle implicit assumption here. The
AI's exponential increase in intelligence is absolutely nothing like
evolution. I don't know how to make this any clearer: if your
intelligence is running on a computer substrate, you inherently have the
ability to create cognitive tools that would enable the AI to far exceed
any human capacity. This super-powerful think tank could literally start
internet businesses or software development businesses by the hundreds and
gain real cash extremely fast. From there the think tank switches itself to
nanotechnological development and essentially does whatever is necessary to
literally optimize whatever goal it has, until the ends of the entire
Universe have been touched by its power.

Welcome to the Singularity.

>An evolving, all-powerful, friendly AI is a new species over
>which we cannot guarantee control over any sufficiently long
>period of time.
>Even if we could guarantee this, we couldn't agree on the
>definition of friendly or the evaluation of its behavior
>yesterday when it stopped some war between two countries.
>Someone didn't get their way; perhaps someone had to die.
>Someone does not think this is friendly. If you are lucky,
>you are not on "that side."
>Herb Martin

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:53 MDT