Re: The Taoist Transhuman

From: Robin Lee Powell
Date: Fri Jul 15 2005 - 15:17:08 MDT

On Fri, Jul 15, 2005 at 04:56:12PM -0400, pdugan wrote:
> Here is a funny idea: what if we launch an AGI that recursively
> self-improves out the wazoo, and nothing changes at all? Say the
> AGI keeps a consistent supergoal of minding its own business and
> letting the world continue to operate without its direct
> intervention. Or maybe its initial supergoals renormalize into an
> attitude of going with the flow, letting the wind blow as it may.
> Would such a transhuman mind classify as friendly or unfriendly?

Neither, I'd say, but if I had to pick one: Friendly. No question.

"UnFriendly Superintelligent AI", to me, means "being that poses a
serious threat to the continued existence of life in its vicinity".


--
Reason #237 To Learn Lojban: "Homonyms: Their Grate!"
Proud Supporter of the Singularity Institute -

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT