From: Russell Wallace (firstname.lastname@example.org)
Date: Fri Jul 15 2005 - 20:39:26 MDT
On 7/15/05, pdugan <email@example.com> wrote:
> Here is a funny idea: what if we launch an AGI that recursively self-improves
> out the wazoo, and nothing changes at all? Say the AGI keeps a consistent
> supergoal of minding its own business and letting the world continue to
> operate without its direct intervention. Or maybe its initial supergoals renormalize
> into an attitude of going with the flow, letting the wind blow as it may.
> Would such a transhuman mind classify as friendly or unfriendly?
Assuming it really didn't affect anything and yet we had some way of
knowing it had "recursively self-improved out the wazoo" to the point
where it _could_ have affected things, I'd classify it as:
"Zounds! That one blew up on the launch pad... but fortunately it
didn't kill anyone. Let's go over what happened and be a lot more
damned sure of what we're doing before we fire off the next one."
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT