Re: ESSAY: Program length, Omega and Friendliness

From: William Pearson (wil.pearson@gmail.com)
Date: Tue Feb 28 2006 - 05:40:50 MST


On 28/02/06, Philip Goetz <philgoetz@gmail.com> wrote:
> On 2/27/06, William Pearson <wil.pearson@gmail.com> wrote:
> > On 27/02/06, Philip Goetz <philgoetz@gmail.com> wrote:
>
> > > Determining friendliness may be as hard as determining whether a
> > > program halts. I don't know. Why would friendliness imply that the
> > > program halts? I don't see any connection.
> >
> > No? If a program doesn't halt, it doesn't output (in the classical
> > formulation of halting). If it doesn't output, it doesn't do anything.
> > Most definitions of friendliness definitely imply that programs are
> > supposed to do something. This may be non-obvious because, as Richard
> > Loosemore noted, we tend to think of AI programs as outputting
> > indefinitely, so it might be clearer if I talked about whether a
> > program hangs rather than halts. However, this discussion with you is
> > consuming the time that I can devote to that endeavour.
>
> This is an interesting way of interpreting the formalism.
> However, think of the application. Any competent AI, friendly or not,
> will be a non-halting program. If you consider only halting programs
> because of how you define "output", you have defined away the
> interesting part of the problem. You can't say that you're going to
> somehow reify all of the outputs produced by an AI over time
> interacting with its environment into one single output, and then make
> it halt.

The approach I am going to take is to change the problem into an
equivalent non-hanging problem:

while (1)
{
  output = performFunction(input);
}

Where output and input are variables connected to effectors and
sensors respectively.

If performFunction halts on all inputs, then the system never hangs.
So if we can prove that performFunction halts on every input, we have
proved that the system as a whole will never hang.
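This transformation can be sketched concretely. In the sketch below
(all names are my own illustrations, not from any real system),
perform_function is written so that it provably halts on every input:
its only loop is bounded by the input's length, so each pass through
the outer loop terminates, and the outer loop is driven by a finite
input stream purely so the demonstration itself finishes.

```python
def perform_function(inp):
    """Toy stand-in for the AI's step function. It halts on every
    input because its only loop runs at most len(inp) times."""
    total = 0
    for ch in inp:          # bounded loop: at most len(inp) iterations
        total += ord(ch)
    return total % 256

def run_system(inputs):
    """The while(1) wrapper, with a finite list standing in for the
    sensor stream so this demonstration terminates."""
    outputs = []
    for inp in inputs:      # stands in for 'while (1)' reading a sensor
        outputs.append(perform_function(inp))
    return outputs

print(run_system(["hello", "world"]))
```

Because every loop in perform_function has a syntactic bound, a proof
system can certify per-call termination without ever reasoning about
the non-halting outer loop.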

Veronica Becher has apparently done some work towards this sort of
problem but I can't find a reference.

> I don't know how to address the problem of applying this sort of
> analysis to programs that accept input from the environment as they
> run, particularly input whose values are conditional on the prior acts
> (outputs) of the program.

The method I outlined above can cope with this, but it can't cope with
programs that store previous inputs. Proof systems don't tend to have
inputs; when they do, the inputs are generally considered oracles for
uncomputable values, so this may or may not be a problem.
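The difficulty with stored inputs can be made concrete. In the sketch
below (the class and names are purely illustrative), once
performFunction keeps a history, a per-call halting proof must hold
for every reachable history, an unbounded family of states, rather
than for a single function of the current input alone.

```python
class StatefulSystem:
    """Variant of the while(1) wrapper whose step function remembers
    past inputs, breaking the simple per-call halting argument."""

    def __init__(self):
        self.history = []

    def perform_function(self, inp):
        self.history.append(inp)
        # Whether this call halts can now depend on the entire history,
        # e.g. this loop is bounded by the *accumulated* input length:
        return sum(len(h) for h in self.history)
```

A termination proof now has to quantify over all possible histories,
which is exactly the extra burden the memoryless transformation avoids.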

> > > > As the bits of the Omega series have no relation to each other,
> > > > they are random, so the shortest way of storing them is as the
> > > > direct information. And because proving friendliness is as hard
> > > > or harder, Phi must be equally random.
> > > >
> > > > A proof system and axioms is a way of storing information about the
> > > > world, we shall call a starting program with embedded proof system and
> > > > axioms S.
> > > >
> > > > How many other programs can S prove the friendliness of?
> > > >
> > > > I have said that the number of programs a proof system can know
> > > > to be friendly is bounded by the number of bits in it. Otherwise
> > > > it has found some hidden relation between the bits of Phi, so
> > > > that Phi can be compressed.
> > >
> > > Well, you're assuming, in advance, that there is no such thing as a
> > > proof system, but that instead, all the bits of Phi must be
> > > enumerated. A proof system is generative, and a proof system of
> > > finite size can prove, for example, that an infinite number of numbers
> > > are either prime or not prime.
> >
> > This has no bearing on determining properties of strings that are
> > interpreted as programs. Primeness of a finite string can be
> > determined in finite time by a finite algorithm. Halting, and I argue
> > friendliness, cannot be. I would advise you to read some Chaitin if
> > you haven't.
>
> Yes it does. As I understand it, you have essentially said that a
> program of length S can tell you only whether S different programs
> halt or not, because there's no way of testing for halting, so really
> the "program" is just a lookup table. Hence, a program of S can also
> tell you only whether S different programs halt or not.
>
> It seems that what you're saying boils down to that Friendliness is undecidable.

Did I not say that when I said that determining friendliness is as
hard or harder than determining whether a program halts?
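The claim that friendliness is at least as hard as halting can be
sketched as a standard reduction. Everything below is my own
illustration (there is of course no real is_friendly decider, and
"unfriendly_action" is a placeholder): given any program p and input
x, build a program Q that behaves harmlessly unless p(x) halts, at
which point it performs some designated unfriendly action.

```python
def make_q(p, x):
    """Return a program Q that is 'unfriendly' exactly when p halts
    on x (illustrative construction, not a real system)."""
    def q():
        p(x)                        # simulate p on x; may never return
        return "unfriendly_action"  # reached only if p(x) halts
    return q

# If a decider is_friendly(Q) existed, then
#   is_friendly(make_q(p, x)) is False  iff  p halts on x,
# so is_friendly would solve the halting problem.
```

Any purported friendliness decider, applied to Q, would thereby decide
whether p halts on x, so no such total decider can exist.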

> Can you restate what exactly you're trying to prove?

I am attempting to find the limits of what a formal proof system can
transform itself into whilst trying to maintain an undecidable
property, in this case friendliness. I am interested in changes that
alter the system in meaningful ways, which is why transforming itself
into an infinite number of systems that all do the same thing is
uninteresting.

What I have said previously is my first attempt at doing so.

What I am saying has nothing to do with systems that experiment, such
as GP or OOPS; in any case, GP and OOPS do not completely self-modify
in the sense of a SeedAI. It would, however, be relevant to the Goedel
Machine.

> What is known about probabilistic approaches to undecidable problems?
> Is it possible to probabilistically solve the halting problem - that
> is, let us say, to be able to create a program that estimates whether
> any given program halts, where a measure of error per guess of its
> predictions can be made arbitrarily low?

Not that I know of. These are my thoughts on the subject.

The probabilistic approach gains you little. If you start
experimenting, then you are evaluating machines rather than strings,
and so the undecidability is not the same. Without experimentation
there is no way to update your prior, so only the amount of
information your prior held would be relevant.

With experimentation, the bits of Omega (or an equivalent number) are
by definition random, so knowing one bit has no bearing on the others,
and I wouldn't expect any improvement. That is, unless you expect the
previous rolls of a die to improve your prediction of which face the
next roll will give. You can get better at knowing the probability
that an arbitrary program will halt by computing better approximations
of Omega (the number this time), but this doesn't help much, because
it gives you no better information about which program to self-modify
into.
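The "better approximations of Omega" idea can be sketched over a toy
program space (everything here, including the space itself and the
function names, is my own illustration). Running each program with a
growing step budget yields a lower bound on the halting probability;
the bound only ever improves, because non-halting is never certified
within a finite budget.

```python
def toy_program(n):
    """Toy program n: repeatedly subtracts 2 from n. It halts iff n
    is even; for odd n the counter skips past zero and runs forever."""
    def run(budget):
        k, steps = n, 0
        while k != 0:
            if steps >= budget:
                return False      # budget exhausted: halting not observed
            k -= 2
            steps += 1
        return True               # halted within the budget
    return run

def halting_lower_bound(num_programs, budget):
    """Fraction of the first num_programs observed to halt within
    budget steps: a monotone lower bound on the halting probability."""
    halted = sum(toy_program(n)(budget) for n in range(num_programs))
    return halted / num_programs
```

Raising the budget can only raise the estimate, which mirrors how
approximations of Omega converge from below; but as noted above, a
tighter bound still says nothing about which particular program to
self-modify into.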

AIXI, which has the sort of error bounds you want, only works on
computable sequences.

 Will Pearson



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT