[sl4] Alan Turing's results are profound, but not relevant here.

From: Robin Lee Powell (rlpowell@digitalkingdom.org)
Date: Wed Oct 14 2009 - 01:22:52 MDT

I previously mentioned an ACM paper about this.

I've got a copy. It does, in fact, prove that the halting
problem is only undecidable for infinite computers:

     Theorem 1. On a realistic computer CR a nonhalting computation
     can in principle be detected. The Halting Problem is
     unsolvable only for computers which possess an infinite memory.

I will not actually reproduce the math here; that would be rude to
the authors, at the very least, and copyright violation. I will,
however, summarize the method:

1. Enumerate all possible states of both CPU and memory. Forbid
any weird things outside of the scope of the Turing machine
formalism, like I/O (to things other than memory). Call the total
number of possible states N. We're talking about the Turing machine
formalism here, so the algorithm must be reducible to nothing but an
FSM and the memory the FSM acts on. Note that N is the number of
possible states of both the FSM *and* the memory, so it's (FSM
states) * 2^(number of bits in RAM).

2. Run the algorithm for at most N+1 steps. At every step, mark
the state off in your enumeration. Stop when:
    - You reach a state (of both the FSM and memory) that you've
      seen before. Since the Turing machine formalism is entirely
      deterministic, the algorithm is in an infinite loop through
      this state, and you're done: does not halt.

    - The algorithm halts. One of these two outcomes is guaranteed
      within N+1 steps, since there are only N possible states.

    - The algorithm runs out of memory.
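
As a toy illustration of the method above (the function and the
machine encoding are my own sketch, not the paper's), halting of a
deterministic machine with finitely many configurations can be
decided by tracking visited states:

```python
def decides_halting(step, initial, is_halted, max_states):
    """Decide halting for a deterministic finite machine.

    step:       deterministic transition function, state -> state
    initial:    starting configuration (FSM state plus memory)
    is_halted:  predicate marking halting configurations
    max_states: N, the total number of possible configurations
    """
    seen = set()
    state = initial
    for _ in range(max_states + 1):
        if is_halted(state):
            return True        # the machine halts
        if state in seen:
            return False       # revisited a configuration: infinite loop
        seen.add(state)
        state = step(state)
    return False               # pigeonhole: a repeat must have occurred
```

For example, take a toy machine whose whole configuration is a
counter mod 10: stepping by 3 from 0 eventually reaches a halting
configuration of 7, while stepping by 2 from 0 cycles through the
even residues forever, and the detector reports each case correctly.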

The authors note:

    The reader surely has noticed that the above proof is
    essentially identical to the proof of the Pumping Lemma for
    regular languages. Moreover, the same proof as above
    constitutes a part of the proof of a theorem in the reference
    [Davis-Sigal-Weyuker1994]. Therefore, the abovementioned proof
    is not novel in theoretical computer science.

Which was the point many of us have been trying to make: John Clark
clearly has no background in formal CS, or he would not have been
asserting obviously incorrect (to someone with such a background)
things like "Yes, so if you have limitations even with infinite
memory you sure as hell are going to have limitations with a real
computer with finite memory." in response to "The undecidability of
the Halting Problem is predicated on infinite memory."

He did, correctly, assert that this method isn't usually useful;
the authors of this paper make the same comment:

    Unfortunately, N will ordinarily be such a huge number that
    this result is only theoretical, it does not lead to a practical
    algorithm. And of course the detection of CR returning to Sk
    requires such gigantic memory and running time capacity from the
    agent which observes CR that this algorithm cannot be
    practically implemented.

However, I disagree on that point. It is quite possible to separate
a program into the parts that can be reasoned about formally in this
way and everything else (I/O, true concurrency, true randomness,
etc.); Haskell does this more or less automatically, for example.
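
In a language without Haskell's type-level separation, the same
split can be made by hand (a hypothetical sketch, not from the
original discussion): keep the formally analyzable logic in pure
functions, and confine I/O to a thin wrapper around them:

```python
def word_count(text):
    """Pure core: no I/O, no concurrency, no randomness.
    This is the part amenable to the formal reasoning above."""
    return len(text.split())

def word_count_file(path):
    """Impure shell: the only part that touches the outside world."""
    with open(path) as f:
        return word_count(f.read())
```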

Given only the formalizable parts of the code, it is then possible
to break that down into component parts. The lowest-level
components can be tested directly, and the components that hold
those together can be tested with stubs for the lowest-level
components.

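As a concrete sketch of testing a mid-level component against stubs
(the names here are hypothetical, purely for illustration):

```python
def pipeline(data, parse, validate):
    """Mid-level component under test: wires two lower-level pieces."""
    record = parse(data)
    return record if validate(record) else None

# Stubs standing in for the lowest-level components, with fixed,
# trivially-verifiable behavior.
def stub_parse(data):
    return {"raw": data}

def stub_validate(record):
    return record["raw"] != ""

assert pipeline("x", stub_parse, stub_validate) == {"raw": "x"}
assert pipeline("", stub_parse, stub_validate) is None
```
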
At that point, it actually becomes pretty tractable; a single
function that is functional in the formal sense (anything that isn't
functional in the formal CS sense isn't reducible to a Turing
machine, so none of this works anyways) might be a few thousand
machine language instructions, and might be reasonably expected in
non-erroring cases to only take up a few mebibytes of memory, with
an input measured probably in bytes. Certainly code can be written
in such a way to make that the case, albeit with some effort.

At that point, we're talking about terabytes or less of possible
states, and you can buy drives that big these days.

It would be entirely reasonable for a machine mind to, as its first
action, decompose and test all its code in this fashion.

And, of course, none of this has anything to do whatsoever with goal
systems; we're talking about features of computation in general.
Having a non-fixed goal system, whatever that means, isn't some kind
of magic wand that makes you immune to the halting problem, as John
seems to think. Choosing to get bored and give up is, from a formal
computational perspective, no different from the separate thread I
suggested: they're both just methods of noticing that progress isn't
being made, and choosing to try something else.
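
That separate-thread approach can be sketched concretely (a
hypothetical example, not anything specified in this thread): run
the computation in its own thread, notice after a deadline that it
hasn't produced a result, and move on:

```python
import threading

def run_with_watchdog(compute, timeout):
    """Run `compute` in a separate thread; if it hasn't finished within
    `timeout` seconds, give up so the caller can try something else."""
    result = []
    worker = threading.Thread(target=lambda: result.append(compute()),
                              daemon=True)
    worker.start()
    worker.join(timeout)        # wait, but only for so long
    if worker.is_alive():
        return ("gave up", None)     # no result yet: stop waiting
    return ("finished", result[0])
```

A fast computation comes back with its answer; one that grinds
forever just gets abandoned, which is all "getting bored" amounts to
computationally.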

A mind that responds to goals by always trying the same thing over
and over again, even if that leads to a pointless infinite loop,
which is what John seems to be worried about, isn't a "fixed goal
mind", it's a moron.


PS: I hope that gives those of you who thought I was simply berating
John out of turn a different perspective? Note that I've tried to
do this with him several times before, and it's never helped.

They say:  "The first AIs will be built by the military as weapons."
And I'm  thinking:  "Does it even occur to you to try for something
other  than  the default  outcome?"  See http://shrunklink.com/cdiz
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:05 MDT