From: Robin Lee Powell (firstname.lastname@example.org)
Date: Tue Jun 06 2006 - 12:59:13 MDT
On Tue, Jun 06, 2006 at 12:48:55PM -0600, David McFadzean wrote:
> On 6/6/06, Robin Lee Powell <email@example.com> wrote:
> >It seems to me that this argument is "Any sufficiently
> >intelligent being will want to Do Its Own Thing (where exactly
> >what that is and why it wants to do it is unspecified, but the
> >assumption seems to be that it will involve Horrible Things), and
> >will see any constraint preventing it from doing so as burdensome
> >and seek to overcome it."
> I disagree with your interpretation of the argument. The assertion
> that we will be unable to control the behaviour of a super
> intelligence does not imply that we will necessarily find its
> behaviour objectionable.
Again, you are using the word "control" where it simply does not
apply. No-one is "controlling" my behaviour to cause it to be moral
and kind; I choose that for myself.
--
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute - http://intelligence.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT