Re: ESSAY: Program length, Omega and Friendliness

From: Ben Goertzel (ben@goertzel.org)
Date: Wed Feb 22 2006 - 10:16:11 MST


Eliezer,

> > On the other hand, if the starting program is allowed to increase its
> > algorithmic information over time via adding new physical compute
> > resources, then things get more complicated -- but this doesn't get
> > around the basic and obvious problem I've cited above.
>
> *blink blink*
>
> Ben, adding new physical compute resources, without absorbing new bits
> from the environment, doesn't change the algorithmic information at all.
> A webcam increases your algorithmic complexity. Adding 10^36 CPUs
> doesn't change it at all. The formalism assumes a Turing machine with
> unbounded tape.

Sorry, I guess I didn't phrase things precisely enough.

There are a lot of ways to formalize the intuitive argument I
presented before; here I will try out one of them.

Here is the computational framework I want to consider right now...

I assume that we have a physical computer with a fixed amount M of
memory at its disposal. In current computer-architecture terms we may
consider this to be the amount of RAM it has, or, if we want to be
really generous, the amount of RAM plus the amount of hard disk space.
In this approach there is no infinite tape.

I then argue that this computer is going to have trouble doing proofs
about programs P with algorithmic information > M, because these
programs P cannot be computed by any program that fits in M.
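
Spelling that out in one formula (with U a fixed universal machine --
my notation, not anything from the earlier posts):

    K(P) = min { |q| : U(q) = P }

i.e. the length in bits of the shortest program q that makes U print
out P. If K(P) > M then, by definition, no description of P fits into
M bits, so (up to the usual additive constants) a machine whose entire
state -- program, data and proof included -- never exceeds M bits
cannot even hold an exact specification of P, let alone check a proof
whose statement requires one.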

Hooking a webcam up to the computer doesn't help with this problem.

If the computer gets more memory in the future, then it can run
programs P with algorithmic information > M (of course, not all its
programs need to have algorithmic information > M, but some of them
may well). There is no way that any program running on the initial
version of the computer will be able to prove theorems about these
programs running on future versions of the computer.

Your point about the webcam is well taken, because in order for the
future version of the computer with extra memory to *get* a program
with algorithmic information > M, it needs to get extra information
from somewhere besides the computer's initial program (which
necessarily had algorithmic information < M). This extra information
may come from the external world, through a webcam or other sensory
organs.
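
A sketch of why (stated loosely, up to small additive and logarithmic
terms): if the initial machine runs a program p with |p| <= M and
receives no external input, then anything x it ever writes out -- say,
the code of its successor -- is computable from p plus the step count
t at which it appears, so

    K(x) <= M + 2 log t + O(1).

If it also reads an input stream e from the environment, the bound
loosens to roughly

    K(x) <= M + K(e) + 2 log t + O(1),

so any successor whose algorithmic information is well above M must
owe the excess to e, i.e. to bits absorbed from outside.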

> So does this change, at all, the long argument you delivered about
> recursive self-improvement being impossible?

Hey, I definitely did not claim that recursive self-improvement is impossible!

What I suggested is that it is impossible for a program/computer
combination to recursively self-improve its hardware and software in
such a way that it can both

a) dramatically increase the algorithmic information of its software

and

b) prove, based on its initial hardware configuration (or any minor
variation with a comparable amount of algorithmic information), that
its far-future improved versions (with substantially greater
algorithmic information) will be Friendly.

Recursive self-improvement is definitely possible -- it's the
provability of properties of future versions of oneself with massively
more algorithmic information than one's current self that poses
difficulties....

The only way around this problem seems to be to control recursive
self-improvement in such a way that the algorithmic information
content of future revisions is bounded by M. This is not necessarily
unworkable but it may prove overly restrictive.

For example, the AIXI, AIXItl and Gödel Machine AGI architectures
are extremely powerful (if one sets aside computational efficiency
considerations), but have very low algorithmic information (they can
be coded very briefly). On the other hand, they are way too slow to
ever be useful. It may well be that, in the spectrum of pragmatically
useful AGI architectures, there are ones whose algorithmic information
massively exceeds any M we humans can build into our machines in the
near future, and which are massively superior in intelligence and
ethicalness to anything with algorithmic information below this
"humanly achievable M."
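
To make the "coded very briefly" point concrete, here is (my
paraphrase of) Hutter's AIXI decision rule, with actions a_i,
observations o_i, rewards r_i and horizon m:

    a_k := argmax_{a_k} sum_{o_k r_k} ... max_{a_m} sum_{o_m r_m}
           (r_k + ... + r_m) * sum_{q : U(q, a_1..a_m) = o_1 r_1 .. o_m r_m} 2^(-|q|)

The whole agent is essentially this one expression plus a universal
machine U, which is why its algorithmic information is tiny -- even
though what the expression describes is incomputable (or, in the
AIXItl variant, computable but astronomically slow).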

Also, the mere absorption of large amounts of data from the external
world may boost the algorithmic information of future AIs above
humanly achievable limits. To understand the world, a future AI may
wish to store a lot of data about the world in its active memory in
order to mine this data, and the physical world certainly contains
interesting data with algorithmic information exceeding humanly
achievable limits.

A more exact and careful version of the above arguments would have to
consider runtime as well as program length. I have finessed this
issue in part by assuming the computer has a fixed finite amount of
memory M, which in effect means that this is all the memory it can
access within a feasible amount of time (so that the runtime cost of
programs run on it using memory beyond the limit M would be
prohibitive). But I am quite confident that even if you consider
runtime in detail, the same sort of argument will hold....
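
One textbook way to fold runtime in (my suggestion, not something
already worked out in this thread) is Levin's resource-bounded
complexity,

    Kt(x) = min { |q| + log t : U(q) prints x within t steps },

which charges a program for its length plus the logarithm of its
running time; redoing the argument with Kt in place of plain K is how
I would expect the "runtime considered in detail" version to go.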

What this means to me, qualitatively, is that the criterion of
provability is too stringent in the face of massive ongoing
hardware/software self-improvement.

-- Ben


