From: Vladimir Nesov (firstname.lastname@example.org)
Date: Thu Oct 08 2009 - 11:37:08 MDT
On Thu, Oct 8, 2009 at 9:25 PM, John K Clark <email@example.com> wrote:
> On Thu, 8 Oct 2009 16:12:02 +0000, "Randall Randall"
>> they're [FAI people] suggesting that there can and should be a highest-level
>> goal, and that goal should be chosen by AI designers to maximize human
>> safety and/or happiness. It's unclear whether this is possible
> It's not unclear at all! Turing proved 70 years ago that such a fixed
> goal (axiom) sort of mind is not possible because there is no way to
> stop it from getting stuck in infinite loops.
You shouldn't make technical claims on a conceptually confusing basis.
How to connect incompleteness with "goals", "infinite loops", "mind",
and goals being "fixed" is far from obvious, and far from having a
single interpretation, so your statement can come out either right or
wrong depending on how one uses imagination to interpret it.
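[Editor's aside, not from the original thread: the halting-problem result Clark appeals to rests on a diagonal construction, which can be sketched in Python. The names `halts`, `make_diagonal`, and `always_false` are hypothetical illustrations; no actual decider exists, which is the point.]

```python
def make_diagonal(halts):
    """Given any claimed halting decider, build a program it must misjudge."""
    def g():
        if halts(g):
            # the decider says g halts, so g loops forever -- decider wrong
            while True:
                pass
        # the decider says g loops forever, so g returns at once -- decider wrong
    return g

def always_false(f):
    # a (deliberately broken) decider claiming nothing ever halts
    return False

g = make_diagonal(always_false)
g()  # returns immediately, refuting always_false's verdict on g
```

Any concrete `halts` supplied to `make_diagonal` is wrong about its own diagonal program: if it answers True, `g` loops; if it answers False, `g` halts. Whether this entails anything about "fixed-goal minds" is exactly the interpretive gap under dispute above.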
-- Vladimir Nesov
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT