From: Miguel Azevedo (email@example.com)
Date: Mon Oct 12 2009 - 15:40:59 MDT
> Is that seriously your argument? That a fixed goal mind *must*
> check itself for infinite loops to function?
I don't really agree with John's point (I just don't see Friendly AI as
having a "fixed goal mind" or slavishly obeying human beings - Eliezer is
quite explicit about that), but IMHO he means that a "fixed goal mind"
*can't* know (as per the halting problem) which tasks will result in an
infinite loop, so eventually it *will* fall into one and cease functioning.
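To make the halting-problem point concrete: since no general test for termination exists, the best any mind can do is *semi-decide* halting, e.g. by simulating a task under a step budget. Here is a minimal sketch of that idea (the generator-based `runs_within` helper and the two toy tasks are my own illustrative assumptions, not anything from the thread):

```python
def runs_within(f, arg, max_steps):
    """Semi-decision sketch: simulate f(arg) for at most max_steps
    'steps' (one yield per step). A full halting test is impossible
    (Turing, 1936), so False only means 'budget exhausted', NOT a
    proof of non-termination."""
    gen = f(arg)  # tasks are written as generators, yielding each step
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True  # task finished within the budget
    return False  # undecided: may halt later, may loop forever

def countdown(n):
    # A task that clearly halts after n steps.
    while n > 0:
        n -= 1
        yield

def forever(_):
    # A task that never halts.
    while True:
        yield

# runs_within(countdown, 5, 100) -> True
# runs_within(forever, 0, 100)  -> False (budget exhausted; undecided)
```

A watchdog like this lets a mind avoid getting stuck, but only by giving up on some tasks it cannot classify, which is exactly the limitation the halting problem imposes.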
-- - Today the world, tomorrow the Solar System
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT