From: Charles D Hixson (email@example.com)
Date: Tue Feb 21 2006 - 21:15:54 MST
On Tuesday 21 February 2006 06:25 pm, William Pearson wrote:
> Cribbing from Chaitin mercilessly....
> The number of friendliness-preserving programs a program can safely
> self-transform into is bounded by the length of the program.
> Let us posit a number Phi which is equivalent to Omega but for
> friendliness. That is, bit 1 is 1 iff the shortest program expressible
> in a programming language (breaking ties in a defined fashion) is
> friendly and has friendliness-maintaining transformations. And so on
> with the second bit and the second shortest program. And Omega: bit
> 1 is 1 iff program 1 halts (I don't know whether this is the classical
> definition).
> Knowing a bit of Phi is 1 is equivalent to knowing a bit of Omega is
> 1, unless not halting is friendly :)
> The easiest way to know whether n programs halt (without empirically
> checking them) is to have them encoded in an n-bit number, each bit
> representing one bit of Omega; similarly for Phi. Quite often this
> is expressed as knowing the first n bits of Omega, but as we are not
> interested in the ordering of Phi we shall treat them as arbitrary
> bits of Phi and hence arbitrary-length* programs.
> So the maximum number of bits of Phi, and hence the number of known or
> provably friendly programs, is bounded by the length of your starting
> program. It would be less than that because you would have to take
> into consideration the bit requirement of the machinery to transform
> your bit of Phi into a program and other things you want your program
> to do. But tightening the bound is not of much interest, I think.
> Anything wrong with this semi-formal proof?
> I hope I haven't covered ground already well trodden, I couldn't find
> anything explicitly positing an equivalent of Omega for the
> friendliness quality and not much on Omega and Chaitin together apart
> from some stuff by Geddes and a very early one by Mitchell Porter.
> Will Pearson
> * But not programs of arbitrary Kolmogorov complexity, by definition.
> This is a rejoin. I am going to try not to argue with people about
> their approaches to AI or hard takeoff, and so not flog the poor
> deceased equines. I will limit myself strictly to self-improvement
> theory and intend to automatically ignore a large percentage of list
> traffic.
> My own work isn't AGI; it has evolved into defining a parallel hardware
> system that can be used as a metaphor for a system like the brain,
> including equivalents to neurovascular coupling and the like. As such
> I don't see my hardware as likely capable of hard takeoff (quite apart
> from my theoretical dislike for it), as I expect that if or when
> human-level software is implemented on it, decades from now or possibly
> never, it would only be able to modify its own software and goal system
> in the same way humans can, i.e. locally and not totally.
Well, that definitely isn't a constructive proof, but even worse, you can
substitute practically any undefined term for "friendly" in that "proof" and
have something equivalently meaningful. And I'm not certain the term needs
to be undefined, except in context.
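As an aside, since the original post was unsure whether its construction is the classical one: Chaitin's Omega is standardly defined as the halting probability of a prefix-free universal machine, whereas the bit-per-program number described above is the characteristic sequence of the halting set, a related but distinct (and far more compressible) object. In notation:

```latex
% Classical Chaitin Omega: the halting probability of a prefix-free
% universal machine U, summed over all halting programs p.
\Omega = \sum_{p \,:\, U(p)\downarrow} 2^{-|p|}

% The construction in the post: the characteristic sequence of the
% halting set, where p_i is the i-th program in length order.
\omega_i =
\begin{cases}
  1 & \text{if } p_i \text{ halts} \\
  0 & \text{otherwise}
\end{cases}
```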
The number of parity-preserving programs a program can safely
self-transform into is bounded by the length of the program.
Let us posit a number Phi which is equivalent to Omega but for
parity. That is, bit 1 is 1 iff the shortest program expressible
in a programming language (breaking ties in a defined fashion) is
even and has parity-maintaining transformations. And so on
with the second bit and the second shortest program. And Omega: bit
1 is 1 iff program 1 halts (I don't know whether this is the classical
definition).
Knowing a bit of Phi is 1 is equivalent to knowing a bit of Omega is
1, unless not halting is even.
You could probably also substitute "stupidity" or "virtue" (both of which have
approximate definitions), but for parity it's true by inspection, so we can
see what the argument means.
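To make the "true by inspection" point concrete, here is a minimal sketch in Python. The "programs" are hypothetical stand-ins (simple zero-argument functions, listed shortest-first), since real length-ordered program enumeration is assumed away; the point is only that with a decidable property like output parity, every bit of the Phi-analog is directly computable:

```python
# Toy illustration of the "Phi for parity" substitution: with a
# decidable property, each bit of the Phi-analog number can be
# computed by inspection, with no halting oracle needed.
# The "programs" below are hypothetical stand-ins, listed shortest-first.

programs = [
    lambda: 0,   # "shortest" program
    lambda: 3,
    lambda: 8,
    lambda: 5,
]

def is_even(program):
    """Decidable analogue of 'friendly': the program's output is even."""
    return program() % 2 == 0

# Bit i of the Phi-analog is 1 iff the i-th shortest program has the property.
phi_bits = [1 if is_even(p) else 0 for p in programs]
print(phi_bits)  # [1, 0, 1, 0]
```

For the genuinely undecidable properties (halting, and presumably friendliness), `is_even` would have to be replaced by an oracle, which is exactly where the bit-counting bound in the quoted argument does its work.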
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT