From: Christian L. (n95lundc@hotmail.com)
Date: Fri Oct 18 2002 - 14:46:23 MDT
Hi,
This discussion prompts an interesting question: How would a friendly AI
respond to the will of a particular human when that very will can so easily
be changed? For example: Ben's wife claims she wants to die a "natural" death
when her time is up. How does the AI respond to this?
The AI might either grant the wish of death by doing nothing, or ever so
gently persuade her to choose life (i.e., subtle mind control). In neither
case has the AI violated any volition; in the latter case it has merely
changed the volition a bit. It could also be argued that the former case
constitutes Unfriendly behavior, since it results in an unnecessary loss of
life.
And given these kinds of manipulations, it is a small step to a scenario
where everyone is "persuaded" to live in a blissed-out, wire-head type
state. Is this Friendly? I really don't know. The people would be happy,
at any rate...
/Christian
Michael Roy Ames wrote:
>I believe you are (or will be) entirely correct in this opinion. Any
>interaction of an advanced transhuman with a PD human could easily involve
>extensive, detailed modelling of that human, and might intrinsically
>include a correspondingly high level of "mind control" or "manipulation"
>(or whatever will be the latest jargon of that time) unless the transhuman
>makes meticulous efforts to avoid it. Once one comes to believe that this
>will be the case, the importance of Friendliness in a transhuman AI
>becomes overwhelmingly apparent.
>
>Michael Roy Ames