Re: [sl4] Complete drivel on this list: was: I am a Singularitian who does not believe in the Singularity.

From: Robin Lee Powell
Date: Mon Oct 12 2009 - 16:38:41 MDT

On Mon, Oct 12, 2009 at 03:05:44PM -0700, John K Clark wrote:
> On Mon, 12 Oct 2009, "Robin Lee Powell" said:
> > Is that seriously your argument? That a fixed goal mind *must*
> > check itself for infinite loops to function?
> Yes that is precisely my argument. A fixed goal mind MUST check
> itself for infinite loops, otherwise whenever the poor AI obeys a
> human command he is playing Russian roulette with his mind.

Thank you to Miguel Azevedo for making this comprehensible to me;
now that he's done so, I agree that the idea is actually a lot more
sensible than I had given it credit for. As originally written, I
had no idea what you were talking about.


If you're right (and I don't give you that you are), then that's
true for *anything* such a mind does, not just human commands. But
see below; that isn't what Turing actually said.

It's also trivial to fix: have a second, very simple thread, proven
to be infinite-loop-free[1], that just does "Hey, haven't heard from
the master thread in a while; push the reset switch!".

[1]: The thing is, Turing did *not* prove that it is impossible to
prove whether a program halts. For most (but not all) small
programs, it's trivial to prove whether they halt or not. In
general, it *can* be proven of most programs that they halt, if
you're willing to expend the effort. What you cannot do is write a
program that will *in general* decide whether another, unrestricted
program halts. You can, for example, write programs that will
determine halting for programs that obey certain constraints; if you
make the constraints tight enough, this isn't even hard. You just
can't do it in general.
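Here's a toy illustration of that last point (my construction, not anything from Turing): in a tiny language whose only loop is "repeat k times: <body>", every program halts, and a checker can compute an explicit step bound by structural induction, without ever running the program.

```python
# Toy program representation (assumed, for illustration): a program
# is a list whose items are either the string 'op' (one primitive
# step) or a tuple ('repeat', k, subprogram).

def step_bound(program):
    """Return an upper bound on execution steps.  Because the only
    loop form is a counted repeat, this recursion always terminates
    and the bound always exists -- i.e. halting is decidable (indeed
    guaranteed) for this restricted class."""
    total = 0
    for instr in program:
        if instr == 'op':
            total += 1
        else:
            _, k, body = instr
            total += k * step_bound(body)
    return total

prog = ['op', ('repeat', 3, ['op', ('repeat', 2, ['op'])]), 'op']
print(step_bound(prog))  # 1 + 3*(1 + 2*1) + 1 = 11
```

Add an unbounded `while` to the language and this trick stops working; that's exactly the gap between "decidable for constrained programs" and "decidable in general".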

All of this leaves the AI with an easy out: it can examine its code
for any places where it can't easily prove haltability, and rewrite
it until it can. Problem solved.
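One standard form that rewrite can take is adding a "fuel" bound: wrap an unanalyzable loop in a counted one, trading possible non-termination for a provable step limit. A sketch, with the fuel value and function names invented for illustration (the Collatz iteration is just a convenient example of a loop whose termination is hard to prove):

```python
def collatz_fueled(n, fuel=1000):
    """Count steps of the Collatz iteration from n, but provably
    halt within `fuel` iterations regardless of what n does.
    Returns the step count, or None if fuel ran out."""
    for steps in range(fuel):
        if n == 1:
            return steps
        n = 3 * n + 1 if n % 2 else n // 2
    return None  # fuel exhausted: the caller decides what to do next

print(collatz_fueled(27))  # 111 -- reaches 1 well within the fuel
```

The unbounded version of this loop has no known termination proof; the fueled version is haltable by inspection, which is all the AI's self-check needs.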


They say: "The first AIs will be built by the military as weapons."
And I'm thinking: "Does it even occur to you to try for something
other than the default outcome?" See ***

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:04 MDT