From: Toby Weston (email@example.com)
Date: Wed Oct 14 2009 - 06:36:16 MDT
It seems to me that we are confusing two different arguments:
1. JKC is making the point that it is not possible to set a fixed goal
for the FAI and hope that it will stick to it for eternity, because the
FAI may go into an infinite loop. (I guess this could be a low-level
subroutine not returning, or some high-level behavior, like a polar
bear at the zoo pacing the same track every minute.) A separate timer
or random-number-generator abort routine will not prevent this; it will
just reset the program, which may return to the loop after each reset
because the environmental factors that caused the initial lock-up are
still there.
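That reset argument can be sketched as a toy model. This is a minimal, hypothetical Python illustration (the names `routine`, `run_with_watchdog`, and the `environment` dict are my assumptions, not anything from the thread): the watchdog aborts the looping routine, but since the environment is unchanged, every restart walks straight back into the same loop.

```python
MAX_STEPS = 100  # watchdog budget per attempt

def routine(environment, max_steps=MAX_STEPS):
    """Loops for as long as the environment keeps triggering the same condition."""
    state = 0
    for _ in range(max_steps):
        if environment["trigger"]:
            state = (state + 1) % 2  # pacing the same track, step after step
        else:
            return "done"
    raise TimeoutError("watchdog fired")

def run_with_watchdog(environment, resets=3):
    """Abort-and-reset does not help: the unchanged environment re-creates the loop."""
    for _ in range(resets):
        try:
            return routine(environment)
        except TimeoutError:
            pass  # reset and retry; nothing about the environment has changed
    return "still stuck after %d resets" % resets
```

With `{"trigger": True}` every attempt times out and the watchdog just burns through its resets; with `{"trigger": False}` the routine returns normally on the first try.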
He then says that any attempt to make the FAI more flexible WILL
introduce security loopholes that a Jupiter-sized ball of smart matter
could exploit to become unfriendly.
2. Other people are taking the view that an AI with a set of high-
level emotions/dispositions/instincts that makes it care for humanity
in the same way that healthy, sane humans care for a baby would let
the AI set its own concrete goals, consistent with its hard-wired
morals, and it would never want to exploit any loopholes. In this
argument, knowing that it was smarter or better should not matter to it,
because it loves us and will never do anything to hurt us.
(Previous threads have mentioned that in this scenario the FAI could
behave in unexpected ways, e.g. turning humanity into a massive kinder-
garten of heroin junkies - because it makes us happy.)
Argument no. 2 does not sound like it has anything to do with Turing
and Gödel to me, so I think this discussion is itself an infinite loop.
On 13 Oct 2009, at 11:37, Robin Lee Powell wrote:
> On Tue, Oct 13, 2009 at 01:32:10AM -0700, John K Clark wrote:
>> On Tue, 13 Oct 2009, "J. Andrew Rogers" <firstname.lastname@example.org> wrote:
>>> The undecidability of the Halting Problem is predicated on
>>> infinite memory.
>> Yes, so if you have limitations even with infinite memory you sure
>> as hell are going to have limitations with a real computer with
>> finite memory.
> You don't actually know anything about formalized computation, do you?
> The halting problem occurs *BECAUSE THE MEMORY IS INFINITE*.
> Jesus. You're not even trying to listen, are you?
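For what it's worth, the finite-memory point in the quoted exchange can be made concrete: a machine with only finitely many configurations either halts or eventually revisits a configuration, so its halting is decidable by cycle detection. A minimal Python sketch (the `halts` and `step` names are hypothetical; `step` maps a configuration to the next one, or to `None` when the machine halts):

```python
def halts(start, step):
    """Decide halting for a machine with finitely many configurations.

    Runs the machine and records every configuration seen; a repeat
    means the deterministic machine is in a cycle and never halts.
    """
    seen = set()
    config = start
    while config is not None:
        if config in seen:
            return False  # repeated configuration: it loops forever
        seen.add(config)
        config = step(config)
    return True  # reached a halting configuration
```

This is exactly what breaks down with infinite memory: the set of reachable configurations need never repeat, so the run-and-watch strategy need never terminate.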
> They say: "The first AIs will be built by the military as weapons."
> And I'm thinking: "Does it even occur to you to try for something
> other than the default outcome?" See http://shrunklink.com/cdiz
> http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT