From: Michael Wilson (mwdestinystar@yahoo.co.uk)
Date: Wed Jun 02 2004 - 08:14:20 MDT
> HYPOTHESIS: COMPUTATIONAL DEPTH OF INTELLIGENT SYSTEMS (CDIS)
> It's not possible to have a system that's capable of a high level of
> intelligence and creativity, that is entirely predictable in its
> behavior
Firstly, we're trying to predict what general class the behaviour will
stay within, not exactly what it will do. Secondly, you need to pin
these terms down if you want to use them in formal hypotheses.
> In computation-theory terms, this hypothesis claims that intelligent
> systems are programs with a large "computational depth" (a notion
> introduced by Charles Bennett), meaning that there is no way to simulate
> their results, given their descriptions, without using a large amount of
> run-time (in the sense of a large number of computational steps).
Again, we're trying to predict what constraints there will be on the
output, not what the output will actually be. This is done by
propagating constraints through intermediate functions and data
structures (standard formal-methods stuff, really).
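To make that concrete, here is a toy sketch in Python (interval
arithmetic standing in for real formal methods; all names are made up
for illustration): propagating bounds through intermediate functions
tells you what region the output must lie in without ever computing
the exact output.

    # Toy constraint propagation: bound a function's output over a whole
    # input range without evaluating it point by point.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

    def f(x):
        # An intermediate computation; we never need its exact result,
        # only the constraint (interval) it propagates.
        return x * x + Interval(1.0, 1.0)

    print(f(Interval(-2.0, 3.0)))  # Interval(lo=-5.0, hi=10.0)

The bound is loose but sound, which is all the constraint-propagation
argument needs.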
> Of course, you are welcome to believe that the CDIS hypothesis
> is false. It's just a hypothesis, I haven't proved it.
I think it's valid with respect to a detailed prediction of
behaviour and false for a general one.
> A relatively "safe" self-modifying AGI design is one that is
> predictable with a relatively high (but not absolute) degree
> of certainty.
Unfortunately, this is wrong. Self-improving intelligence is a
nonlinear process. You either constrain it within a desirable space,
or you don't. Relying on good luck and poorly understood corrective
mechanisms to keep it in that space, and merely estimating the
probability that it stays there, is an enterprise doomed to failure.
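A toy calculation (numbers purely illustrative) shows why: if each
self-modification step independently stays in the desirable space with
probability 0.99, the system is more likely out of that space than in
it after a hundred rewrites, and is virtually certain to leave it
eventually.

    # Probability of remaining in the desirable space after n independent
    # self-modification steps, each succeeding with probability p.
    p = 0.99
    for n in (10, 100, 1000):
        print(n, round(p ** n, 5))
    # 10 0.90438 / 100 0.36603 / 1000 4e-05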
> I suspect there is some kind of INTELLIGENCE INDETERMINACY
> PRINCIPLE (IIP) of the rough form.
This is a neat summary of the basic pathology behind the
probabilistic approach to goal-system design and self-modification.
If this principle existed as stated, the SIAI project would be in
serious trouble. Fortunately, intelligence is the power to remove
uncertainty, not to increase it.
> [FAI] doesn't require that the self-modification of an AI system
> be tractably deterministic... it merely requires that certain
> probabilities about the future evolution of the system be
> estimable with a high degree of confidence... it argues for the
> shutting-down of some dead-end paths toward Friendly AI, such as
> the path that seeks an AI whose overall behavior is in some sense
> predictable.
I don't understand what you're saying here. If probabilities of the
system exhibiting given behaviour classes are estimable, that seems
like 'overall behaviour that is in some sense predictable' to me.
Without a resolution of that contradiction I can't dispose of the
rest of your argument. Clearly the self-modification has to be
tractable and deterministic for the AI to do it in the first place,
but we want to know what will happen without running a takeoff.
There is no way to do this probabilistically; fortunately, it is
possible to design architectures and goal contents that will provably
stay in some useful area of utility-function-space.
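To illustrate the kind of thing I mean (a cartoon only, with made-up
names; the 'invariant' is a trivial placeholder predicate, not a real
verified goal system), a proposed self-modification is applied only if
the invariant still holds for the candidate:

    # Cartoon of invariant-preserving self-modification: a rewrite is
    # applied only if a checker establishes that the goal-system
    # invariant still holds. A real system would need an actual verifier.

    def invariant_holds(goal_system):
        # Placeholder for the invariant we want preserved across rewrites.
        return goal_system.get("utility") == "original_utility_function"

    def apply_modification(goal_system, modification):
        candidate = modification(dict(goal_system))
        if invariant_holds(candidate):
            return candidate      # still inside the safe region
        return goal_system        # reject the rewrite

    system = {"utility": "original_utility_function", "planner": "v1"}
    safe_mod = lambda s: {**s, "planner": "v2"}
    unsafe_mod = lambda s: {**s, "utility": "something_else"}

    system = apply_modification(system, safe_mod)    # accepted
    system = apply_modification(system, unsafe_mod)  # rejected
    print(system)  # {'utility': 'original_utility_function', 'planner': 'v2'}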
* Michael Wilson