Re: In defense of Friendliness

From: Yedidya Weil (yedweil@yahoo.com)
Date: Fri Oct 18 2002 - 18:53:08 MDT


--- "Christian L." <n95lundc@hotmail.com> wrote:
> Hi,
>
> This discussion prompts an interesting question: How would a
> friendly AI respond to the will of a particular human when that
> very will can so easily be changed? For example: Ben's wife
> claims she wants to die a "natural" death when her time is up.
> How does the AI respond to this?
>
> He might either grant the wish of death by doing nothing, or
> ever so gently persuade her to choose life (i.e., subtle mind
> control). In neither case has the AI violated any volition; in
> the latter case he has merely changed the volition a bit. It
> could also be argued that the former case is Unfriendly
> behavior, since it resulted in unnecessary loss of life.

How about the AI informing the person that ve (the AI) thinks
the person's opinion is incorrect, and that ve is available to
discuss it? The person can then choose whether or not to open
verself to persuasion by a higher intelligence. Volition is
preserved, and the system lets the subject choose how much ve is
to be enlightened. The system therefore cannot be held
accountable for the loss of life: having given the person a
chance to be convinced, it leaves ver essentially taking ver own
life, which I think everybody here agrees is a natural right.

Yedidya



