From: Peter de Blanc (email@example.com)
Date: Fri Jul 15 2005 - 19:31:15 MDT
On Fri, 2005-07-15 at 16:56 -0400, pdugan wrote:
> Here is a funny idea: what if we launch an AGI that recursively self-improves
> out the wazoo, and nothing changes at all? Say the AGI keeps a consistent
> supergoal of minding its own business and letting the world continue to
> operate without its direct intervention. Or maybe its initial supergoals
> renormalize into an attitude of going with the flow, letting the wind blow as it may.
> Would such a transhuman mind classify as friendly or unfriendly?
The term 'transhuman' is inappropriate here, because an AGI with such a
goal system would erase itself long before becoming transhuman.
There's no point in creating an AGI like this. If all goes according to
plan, your AGI will erase itself, no harm done. On the other hand, if
you mess up the goal system, congratulations, you've created a UFAI.
IMO, any AGI that is not explicitly Friendly should be considered
Unfriendly. Even if you successfully engineer an AGI that decides to
leave humans alone, the same technology can be turned into a dangerous
UFAI far more easily than it can be turned into an FAI, which needs to
be designed Friendly from the beginning.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT