Re: ESSAY: Program length, Omega and Friendliness

From: Nick Hay (nickjhay@hotmail.com)
Date: Sat Feb 25 2006 - 16:10:49 MST


Philip Goetz wrote:
> On 2/21/06, William Pearson <wil.pearson@gmail.com> wrote:
>
>>Cribbing from Chaitin mercilessly....
>>
>>The number of friendliness preserving programs a program can safely
>>self transform itself into is bounded by the length of the program.
>>
>>Let us posit a number Phi which is equivalent to Omega but for
>>friendliness. That is bit 1 is 1 iff the shortest program expressible
>>in a programming languages (breaking ties in a defined fashion) is
>>friendly and has friendliness maintaining transformations. And so on
>>with the second bit and the second shortest program. And omega is bit
>>1 is 1 iff program 1 halts (I don't know whether this is the classical
>>definition)
>
> Omega is the sum, over all programs that halt, of 1/(length of program
> in bits), when programs are expressed in a way so that no program is a
> prefix of another program (necessary so that the sum of the lengths of
> all programs would be 1). What you've defined works only if, for
> every n, there is exactly 1 program of length n. Fortunately there
> are more than that.

I think you mean the sum of 1/2^{length of the program in bits} over all
halting programs here. In this model the set of programs is prefix-free, which
ensures the sum is at most 1 (that this is so is the content of Kraft's
inequality). Programs have to be self-delimiting if you want to generate them
simply through successive coin flips: the computer has to tell you when to
stop flipping the coin.

The modified definition of omega, with each bit recording whether the
corresponding program halts, is actually also used (well, I've done exercises
about it, anyway!). Although each bit sequence can be computed from the other,
the modified omega requires far less computation time to work with than omega
itself.
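
In symbols (again my notation): if p_1, p_2, p_3, ... enumerates the programs
in length-then-lexicographic order, the modified omega is the characteristic
sequence

    H = h_1 h_2 h_3 \ldots, \qquad h_i = 1 \text{ iff } p_i \text{ halts}

which is what William's Phi mirrors, with "halts" replaced by "is Friendly and
Friendliness-preserving".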

>>Knowing a bit of Phi is 1 is equivalent to knowing a bit of Omega is
>>1, unless not halting is friendly :)
>
> We are leaving halting out of it, I think. Actually, halting would be
> friendly, or at least neutral - a program that had halted couldn't goo
> you. Anyway, the point is, Phi is about friendliness, not halting;
> don't confuse the two.

Yah, I don't see the direct connection between Phi and Omega like this --
although halting programs could still be dangerous if they ran long enough.

One *can* generalise Omega from the probability that a program halts (or, for
modified omega, the bit sequence of halting programs) to any non-trivial
property of a program's behaviour, for instance its Friendliness. Then, by
Rice's theorem (an analogue of the undecidability of the halting problem),
these generalised omegas have the same properties as the original (I think).
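
A sketch of what that generalisation might look like (my notation): for a
non-trivial behavioural property P, define

    \Omega_P \;=\; \sum_{p \,:\, U(p)\ \text{has property}\ P} 2^{-|p|}

i.e. the probability that a randomly generated program has property P. Taking
P = "halts" gives ordinary Omega back, and taking P = "is Friendly" gives
something close to the Phi under discussion; Rice's theorem is what makes the
Friendliness version just as uncomputable as the original.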

I don't see how this helps you, however.

> In any case, there is no 1-1 correspondence between bits of Phi, and
> programs. The cardinality of the set of bits of Phi is aleph-null;
> the cardinality of the number of programs is 2^(aleph-null). Well,
> technically you could construct a 1-1- correspondence, if you take the
> continuum hypothesis as false, but there's no obvious way to take a
> bit of Phi and find the program for it, or vice-versa.

It's true that there's no 1-1 correspondence between the bits of the standard
Omega and whether individual programs halt. However, you can compute one from
the other. Basic idea: dovetail all programs, adding 2^{-|p|} to a running
total each time a program p is seen to halt; once the total reaches the value
of the first n bits of omega (read as a binary fraction), every program of
length at most n that is ever going to halt has already done so, and you can
read off the set of halting programs up to length n. There is a correspondence
between the bits of Phi and programs by its definition.
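
To make that "basic idea" concrete, here is a rough Python sketch of the
dovetailing argument. The machine interface (halts_within, programs_of_length)
is invented for illustration -- a real prefix-free universal machine would be
needed to fill it in:

    from fractions import Fraction

    def halting_programs_up_to(omega_bits, halts_within, programs_of_length):
        """Recover which programs of length <= n halt, given the first n bits
        of Omega after the binary point (omega_bits, a string of '0'/'1').
        halts_within(p, t) and programs_of_length(l) are assumed stand-ins
        for a prefix-free universal machine, not real library calls."""
        n = len(omega_bits)
        omega_n = sum(Fraction(int(b), 2 ** (i + 1))
                      for i, b in enumerate(omega_bits))
        halted, lower_bound, t = set(), Fraction(0), 0
        # Dovetail over all programs: at stage t, run every program of length
        # <= t for t steps.  Once the partial sum of 2^-|p| over halted
        # programs reaches omega_n, we know Omega - lower_bound < 2^-n, so any
        # program of length <= n not yet seen to halt never halts.
        while lower_bound < omega_n:
            t += 1
            for length in range(1, t + 1):
                for p in programs_of_length(length):
                    if p not in halted and halts_within(p, t):
                        halted.add(p)
                        lower_bound += Fraction(1, 2 ** len(p))
        return {p for p in halted if len(p) <= n}

The catch is the running time: it is bounded only by how slowly lower_bound
creeps up to omega_n, which is enormous. That is the sense in which the
modified omega is far cheaper to use than Omega itself.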

The cardinality of the set of programs (prefix-free or not) *is* aleph-null,
i.e. the set of programs is countable. This is because there are only
countably many finite bitstrings. There are uncountably many infinite
bitstrings, but we don't use those as programs.

>>It would be less than that because you would have to take
>>into consideration the bit requirement of the machinery to transform
>>your bit of Phi into a program and other things you want your program
>>to do. But tightening the bound is not of much interest I don't think.
>>
>>Anything wrong with this semi-formal proof?
>
>
> Two key problems:
>
> 1. Each bit in Phi does not correspond to a program. If you want
> each bit to correspond to a program, you have to construct Phi in a
> different manner. If you want Phi to be a number (a sequence of
> bits), then you must be aware that claiming that you can make each bit
> in a sequence of bits correspond to one of the set of possible
> programs, is claiming that the continuum hypothesis is false.

This isn't a problem, as I've argued above...

> 2. This last bit about the maximum number of bits being bounded by
> your starting program. That doesn't seem to connect to anything else
> in the proof.

...but this is. More detail here, please.

-- Nick Hay

>
> - Phil


