From: Chris Capel (email@example.com)
Date: Fri Jul 15 2005 - 15:06:36 MDT
On 7/15/05, pdugan <firstname.lastname@example.org> wrote:
> Here is a funny idea, what if we launch an AGI that recursively self-improved
> out the wazoo, and nothing changes at all. Say, the AGI keeps a consistent
> supergoal of minding its own business and letting the world continue to
> operate without its direct intervention. Or maybe initial supergoals renormalize
> into an attitude of going with the flow, letting the wind blow as it may.
> Would such a transhuman mind classify as friendly or unfriendly?
> - Patrick
I think "Friendly" in these instances (or at least, "good enough") is
anything that doesn't end up leading to a dystopia or extermination of
mankind or worse. But the probability of an AGI unintentionally ending
up with the sort of goal system you mention seems pretty small to me,
and not really worth much consideration in any case.
Would it be regarded as a success? If we could convince the AI to help
us with designing another one, or if it somehow kept things from going
terribly wrong in the world, perhaps. But if it really did just sort
of sit there and not *do* anything, I think it could be regarded as a
failure. A sort of reverse-wireheading, if you will. Self-implosion.
-- "What is it like to be a bat? What is it like to bat a bee? What is it like to be a bee being batted? What is it like to be a batted bee?" -- The Mind's I (Hofstadter, Dennett)
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT