From: Stuart Armstrong (dragondreaming@googlemail.com)
Date: Fri Oct 09 2009 - 05:45:26 MDT
2009/10/8 John K Clark <johnkclark@fastmail.fm>:
>> they're [FAI people] suggesting that there can and should be a highest-level
>> goal, and that goal should be chosen by AI designers to maximize human
>> safety and/or happiness. It's unclear whether this is possible
>
> It's not unclear at all! Turing proved 70 years ago that such a fixed
> goal (axiom) sort of mind is not possible because there is no way to
> stop it from getting stuck in infinite loops.
You obviously have no understanding whatsoever of Turing's work, or
you wouldn't write such nonsense. What Turing proved is that no
algorithm can decide, for every program, whether that program halts;
he did not prove that a machine with a fixed highest-level goal must
get stuck in infinite loops. Go back, read a good summary of his
theorems (it's really not that hard), work through the propositions,
come up with a genuine understanding of what an algorithm can and
cannot do, and then we can continue the discussion.
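To see exactly what the theorem does and doesn't rule out, here is a
minimal Python sketch of Turing's diagonal argument (the names halts
and diagonal are illustrative, not Turing's own notation):

def halts(prog, arg):
    # Assume, for contradiction, a total decider exists:
    # returns True iff prog(arg) halts. Turing's theorem is
    # precisely that no such total function can exist.
    raise NotImplementedError

def diagonal(prog):
    # Do the opposite of whatever halts() predicts about
    # prog run on its own source.
    if halts(prog, prog):
        while True:   # predicted to halt -> loop forever
            pass
    # predicted to loop -> halt immediately

Running diagonal on itself contradicts any answer halts() could
give, so no halting decider exists. That is a limit on programs that
try to predict *other* programs; it says nothing about whether a
mind built around a fixed highest-level goal must loop forever.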