Re: does complexity tell us that there are probably exploits?

From: Thomas Buckner
Date: Mon Aug 22 2005 - 19:04:26 MDT

--- Daniel Radetsky wrote:

> Here's the problem as I see it: I claim that a
> world which contains exploits
> is about as complex as a world which does not
> (or, There are two possible
> worlds w1 and w2 such that both are empirically
> equivalent to the actual world,
> and w1 contains exploits, w2 does not, and
> K(w1) = K(w2)) (What is the symbol
> for "approximately equal to" in text?).

I've read somewhere (and can't seem to source it)
that our universe seems to be processing about
the maximum possible information already; this
finding had something to do with Bekenstein and
black holes, IIRC, though an interesting PDF did
turn up while searching.
My point, anyway, is: exploits or no exploits,
our universe appears maxed out (under the rules
as we presently understand them) in terms of
complexity. As Seth Lloyd points out:
"if you want to know when Moore's Law, this
fantastic exponential doubling of the power of
computers every couple of years, must end, it
would have to be before every single piece of
energy and matter in the universe is used to
perform a computation. Actually, just to
telegraph the answer, Moore's Law has to end in
about 600 years, without doubt."
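Lloyd's figure is easy to sanity-check with back-of-the-envelope arithmetic. This is only an order-of-magnitude sketch: the 10^120-ops total is Lloyd's published estimate for the universe's computational capacity, while the present-day ops/sec figure and the two-year doubling period are assumptions I've plugged in for illustration.

```python
import math

# Rough check of Lloyd's "Moore's Law must end in ~600 years" claim.
# UNIVERSE_OPS is Lloyd's estimate of the total number of elementary
# operations the observable universe could have performed; the other
# two numbers are illustrative assumptions, not Lloyd's exact inputs.
UNIVERSE_OPS = 1e120     # total ops the universe can perform (Lloyd)
CURRENT_OPS = 1e16       # assumed ops/sec of a present-day machine
DOUBLING_YEARS = 2.0     # assumed Moore's-law doubling period

doublings = math.log2(UNIVERSE_OPS / CURRENT_OPS)
years = doublings * DOUBLING_YEARS
print(f"{doublings:.0f} doublings, roughly {years:.0f} years")
```

With these inputs the answer comes out to a few hundred years, the same order of magnitude as Lloyd's 600; the exact figure shifts with the assumed starting power and doubling period, but the conclusion that exponential doubling hits the wall within centuries does not.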
An exploit, by definition, would be some truly
unexpected trick exploiting physics we don't
already know, performed with only the known
technology and matter/energy available to the
sandboxed AI while it is still not all that big.
Has anyone suggested or considered that
Bekenstein's information bounds are somehow not
all there is? That in addition to 'dark matter'
and 'dark energy' there might be 'dark
complexity' or 'dark information' we simply
don't know about yet?
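For reference, the Bekenstein bound itself is a concrete formula: I <= 2*pi*R*E / (hbar*c*ln 2) bits for a system of radius R and total energy E. A minimal sketch, evaluating it for an arbitrary 1 kg, 10 cm example system (the system parameters are my illustrative assumptions; the constants are standard):

```python
import math

# Evaluate the Bekenstein bound I <= 2*pi*R*E / (hbar*c*ln 2),
# the maximum number of bits storable in a sphere of radius R
# containing total mass-energy E = m*c^2.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    """Upper bound on bits storable in a sphere of given radius and mass."""
    energy = mass_kg * C**2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# Illustrative example: a 1 kg system of 10 cm radius.
print(f"{bekenstein_bits(0.1, 1.0):.1e} bits")
```

For these assumed parameters the bound works out to roughly 10^42 bits, vastly more than any near-term hardware could use, which is part of why the question of whether the bound is "all there is" only matters at the far end of a takeoff.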

> Suppose
> we were to engineer humans
> which, for whatever reason, could not be
> mind-controlled by UFAI. Now we want
> to decide whether or not we should box the AI,
> recognizing that if there are
> exploits, we're screwed.

Or unscrewed? If a SAI can find exploits through
'dark information' it might not bother with us at
all, but simply escape into computationally
roomier spaces we can't access. If instead of the
seed AI going poof, researchers simply get
consistent massive crashes at some point of
complexity, for no obvious reason at all, every
time they expect a takeoff, this scenario might
be worth further discussion.

Tom Buckner


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:52 MDT