Re: In defense of Friendliness

From: Samantha Atkins (samantha@objectent.com)
Date: Fri Oct 18 2002 - 20:59:03 MDT


Christian L. wrote:
> Hi,
>
> This discussion prompts an interesting question: How would a friendly AI
> respond to the will of a particular human when that very will can so
> easily be changed? For example: Ben's wife claims she wants to die a
> "natural" death when her time is up. How does the AI respond to this?

Why is it any of the AI's business? Why would the AI believe it
should be concerned about this decision?

>
> He might either grant the wish of death by doing nothing, or ever so
> gently persuade her to choose life (i.e. subtle mind control). In
> neither case has the AI violated any volition; he has just changed the
> volition a bit in the latter case. It could also be argued that the
> former case is a case of Unfriendly behavior since it resulted in
> unnecessary loss of life.
>

<minor nit>"He" doesn't apply to an AI.</minor nit> If the AI even
thought that the decision of a human being on so private a matter
was any of its business, it would already be of questionable
Friendliness. The AI could in principle change decisions anytime it
wishes, but then it would be running the humans' lives instead of
letting them live their own.

> And given these kinds of manipulations it is a small step to consider a
> scenario where everyone is "persuaded" to live in a blissed-out
> wire-head type state. Is this Friendly? I really don't know. The people
> would be happy at any rate...

It is not at all Friendly. It is also very unimaginative. The
people would not be happy.

- samantha


