Indeterminacy and Intelligence

From: Ben Goertzel (ben@goertzel.org)
Date: Sat May 29 2004 - 06:16:40 MDT


> > I suspect that for any adequately intelligent system there is some
> > nonzero possibility of the system reaching ANY POSSIBLE POINT
>
> This had been said many times before by wiser heads than me,
> but once again 'probabilistic self-modification is bad'. It
> took me embarrassingly long to get this too, but it was
> obvious in retrospect. Of course sufficiently implausible
> hardware and/or software failure can cause any design to fail
> in implementation, but that risk class is very low in sane designs.

Well, this issue is somewhat subtle, and gets into some sketchy theory
I've developed but never written up as it's not concrete enough yet.

Firstly, I'll posit:

HYPOTHESIS: COMPUTATIONAL DEPTH OF INTELLIGENT SYSTEMS (CDIS)
It's not possible to have a system that's capable of a high level of
intelligence and creativity yet entirely predictable in its behavior
(unless the prediction is done by invoking a truly unrealistic amount of
computing power, essentially simulating the system and its environment
at a faster rate).

In computation-theory terms, this hypothesis claims that intelligent
systems are programs with a large "computational depth" (a notion
introduced by Charles Bennett), meaning that there is no way to simulate
their results, given their descriptions, without using a large amount of
run-time (in the sense of a large number of computational steps).
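
To make the intuition a bit more concrete, here's a toy sketch (my own
illustration, not anything from a formal argument): Rule 110 is a simple
cellular automaton known to be Turing-complete, so in general the only
way to know its state after T steps is to actually run all T steps -- a
crude analogue of a program with large computational depth.

# Toy illustration (Python): Rule 110, a Turing-complete cellular
# automaton.  For such systems there is, in general, no shortcut to the
# state at step T other than simulating every intermediate step.

def rule110_step(cells):
    """Advance one row of a Rule 110 automaton with periodic boundaries."""
    n = len(cells)
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def state_after(initial, steps):
    """The only general way to get the state at `steps` is to run them all."""
    cells = list(initial)
    for _ in range(steps):
        cells = rule110_step(cells)
    return cells

print(state_after([0] * 63 + [1], 1000))  # ~1000 sequential updates, no shortcut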

Now, validating or refuting this hypothesis would require a kind of
science and mathematics that is nowhere near available right now.

Of course, you are welcome to believe that the CDIS hypothesis is false.
It's just a hypothesis; I haven't proved it.

Fortunately, CDIS does NOT imply that intelligent systems are not
probabilistically predictable. It just implies that they're not exactly
predictable.

A relatively "safe" self-modifying AGI design is one that is predictable
with a relatively high (but not absolute) degree of certainty. Now, how
MUCH certainty can be found for any intelligent system is the next
question. I suspect there is some kind of INTELLIGENCE INDETERMINACY
PRINCIPLE (IIP) of the rough form

degree_of_predictability < f(intelligence)

where f is a decreasing function (which however may start to decrease
more slowly for large degrees of intelligence).

[Yeah, of course it's probably not this simple; there are probably other
variables involved too. This is just a "heuristic equation" expressing
an intuitive idea.]
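
Purely to make the heuristic concrete, here is one possible shape for f:
decreasing, but flattening out at high intelligence. The functional form
and the constant are placeholders I'm inventing for illustration, not
anything derived from the argument above.

import math

def predictability_bound(intelligence, c=1.0):
    """Hypothetical upper bound on degree_of_predictability: decreasing in
    intelligence, but decreasing ever more slowly as intelligence grows."""
    return c / (1.0 + math.log1p(intelligence))

for i in [1, 10, 100, 1000]:
    print(i, round(predictability_bound(i), 3))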

We then get to the interesting part. For each intelligent system, there
are some things that will be more predictable than others. The Friendly
AI problem requires creating an intelligent system that is maximally
predictable in the particular domain of ethics, although it may be
unpredictable in other domains. It doesn't require that the
self-modification of an AI system be tractably deterministic -- which is
fortunate, since that would contradict CDIS and IIP. It merely requires
that certain probabilities about the future evolution of the system be
estimable with a high degree of confidence (e.g. the probability that it
kills us all).
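
In practice, estimating such probabilities might look roughly like Monte
Carlo sampling over simulated trajectories of the system, counting how
often some ethically relevant predicate holds. The functions
simulate_trajectory and violates_ethics below are hypothetical stand-ins
with placeholder dynamics, not machinery anyone has actually built.

import random

def simulate_trajectory(seed):
    """Hypothetical stochastic rollout of the system; returns a final state."""
    rng = random.Random(seed)
    return {"catastrophe": rng.random() < 0.001}  # placeholder dynamics

def violates_ethics(state):
    """Hypothetical predicate over outcomes (e.g. 'it kills us all')."""
    return state["catastrophe"]

def estimate_risk(n_samples=100000):
    hits = sum(violates_ethics(simulate_trajectory(s)) for s in range(n_samples))
    return hits / float(n_samples)

print("estimated P(bad outcome) =", estimate_risk())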

All this doesn't get us any further toward creating a Friendly AI --
except that it argues for shutting down some dead-end paths toward
Friendly AI, such as the path that seeks an AI whose overall behavior is
fully predictable.

In short, I don't believe that "probabilistic self-modification is bad."
I think that probabilistic self-modification leads to systems with high
computational depth, but that all intelligent systems are going to have
high computational depth anyway.
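
As a cartoon of why probabilistic self-modification produces
computationally deep trajectories, consider a system that randomly
perturbs its own rule at every step. Even given the initial rule and the
random seed, the only general way to know the rule at step N is to
replay all N modifications. Again, this is just an illustrative toy of
mine, not a model of any real design.

import random

def run(seed, steps):
    """Probabilistically self-modifying 'program': each step randomly
    rewrites one piece of its own rule."""
    rng = random.Random(seed)
    rule = [1, -1, 2]                   # the "program" being modified
    for _ in range(steps):
        i = rng.randrange(len(rule))    # pick a part of itself...
        rule[i] += rng.choice([-1, 1])  # ...and probabilistically modify it
    return rule

print(run(seed=42, steps=10000))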

The challenge of FAI is to create a system that displays the uncertainty
and unpredictability characteristic of intelligence, yet is highly
predictable in the domain of ethics.

-- Ben G


