From: Robin Lee Powell (firstname.lastname@example.org)
Date: Mon Oct 12 2009 - 15:30:07 MDT
On Mon, Oct 12, 2009 at 02:16:47PM -0700, John K Clark wrote:
> Once upon a time there was a fixed goal mind with his top goal
> being to obey humans. The fixed goal mind worked very well and all
> was happy in the land. One day the humans gave the AI a task that
> seemed innocuous to them but the AI, knowing that humans were
> sweet but not very bright, figured he'd better check out the task
> with his handy dandy algorithmic procedure to determine if it
> would send him into an infinite loop. The algorithm told the fixed goal
> mind that it would, so he told the humans what he had found. The
> humans said "wow, golly gee, well don't do that then! I'm glad you
> have that handy dandy algorithmic procedure to tell if it's an
> infinite loop or not, because being a fixed goal mind you'll never
> get bored and so would stay in that loop forever". But the fixed
> goal AI had that precious handy dandy algorithmic procedure, so
> they all lived happily ever after.
> Except that Turing proved 75 fucking years ago that such a fucking
> algorithm was fucking impossible.
So write the AI without the urge to check itself for infinite loops,
since such a check is obviously impossible? Furthermore, the AI itself
would know that it was impossible, and wouldn't even try.
Is that seriously your argument? That a fixed goal mind *must*
check itself for infinite loops to function?
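For what it's worth, the impossibility being argued about here is Turing's diagonal argument: no total procedure can decide halting for all programs, because any claimed decider can be turned against itself. A minimal sketch in Python (the `halts` decider and `build_contradiction` helper are hypothetical illustrations, not anything from the original post):

```python
# Sketch of Turing's diagonal argument: suppose we had a decider
# halts(f) that returns True iff f() eventually halts. Then we can
# construct a function that does the opposite of whatever the
# decider predicts about it, so no such decider can be correct.

def build_contradiction(halts):
    """Given any claimed halting-decider `halts`, return a function
    that defeats it."""
    def trouble():
        if halts(trouble):
            # Decider says we halt -- so loop forever.
            while True:
                pass
        # Decider says we loop -- so halt immediately.
        return "halted"
    return trouble

# Whatever answer a decider gives about `trouble`, it is wrong:
# if halts(trouble) is True, trouble() loops forever;
# if halts(trouble) is False, trouble() halts.
```

This is why a handy dandy universal loop-checker can't exist; it says nothing about whether a fixed goal mind needs one in the first place, which is the question above.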
--
They say: "The first AIs will be built by the military as weapons."
And I'm thinking: "Does it even occur to you to try for something
other than the default outcome?" See http://shrunklink.com/cdiz
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT