Re: Model Starvation and AI Navel-Gazing

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Mar 15 2005 - 18:29:37 MST


J. Andrew Rogers wrote:
>
> No amount of navel-gazing will make an AI any smarter than it was a
> few minutes prior, assuming any vaguely efficient design. Just
> because the secret to all human behavior may exist in the digits of
> pi does not imply that there is any more meaningful knowledge in pi
> than its intrinsic information (which is damn little). Every
> argument that I've ever seen claiming significant utility from AI
> navel-gazing and RSI has simply moved where the vast body of learned
> knowledge is hidden to somewhere else. Environmental data is not
> fungible and you need gobs of it to have an internal model that is
> even loosely valid, and the intelligence cannot exceed the total
> environmental information in the system or make meaningful
> predictions outside its domain of applicability. The amount of data
> required to usefully reverse-engineer even part of the simplest
> algorithmic systems vastly exceeds the intrinsic complexity of the
> systems themselves.
>
> This is the frequently glossed over issue of model starvation. You
> cannot solve the problem of model starvation by churning on the same
> bits over and over, as no new information (i.e. potential knowledge)
> is added to the system. Nor can the system automagically obtain more
> information than is intrinsic to what it has been exposed to.
> Intelligence is powerful precisely because it is a reflection of its
> environment and nothing more (except perhaps whatever simple biases
> exist in its machinery). There is a bootstrap problem here; an AI
> too ignorant to be a threat is too ignorant to become a threat on its
> own without a lot of help from its environment, and computers live
> in sensory deprivation chambers.
>
> Could one design a system and environment that allows the AI to
> quickly become adept at understanding and manipulating human
> behavior? Sure! But the point is that this is not a feature of
> intelligence but of the internal model built via the intelligence,
> and would require a vast quantity of environmental data to build
> that model. A sufficiently complex AI system with a rich
> environment may arrive at that capability eventually on its own, but
> you'll have to wait a while. It would be a trivial thing to build
> domain-specific super-intelligence via selective model starvation on
> the part of its designers. Obviously, this would significantly limit
> some of the theoretical utility of such a system. But the real
> point is that model starvation is the default state for intelligent
> systems generally, and that it is quite expensive to extend knowledge
> in any given direction, something that the designers of the system
> can easily control; whether they actually do or not is another issue.

James,

Do you believe this limitation on intelligence holds true for the
infinite-computing-power version of AIXI? That is, do you think it's a
hard information-theoretical limit, rather than an inefficiency of
bounded computing power?
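
(To make sure we're pointing at the same theorem: I read you as invoking
something like the data processing inequality. If the environment is E,
the observations the AI has received so far are O, and any amount of
further internal computation yields a model M = f(O), then E -> O -> M
is a Markov chain and

    I(E; M) = I(E; f(O)) <= I(E; O)

however much computation goes into f. Reprocessing the same bits cannot
increase the information the system holds about its environment; it can
only make more of that information explicit. Correct me if that is not
the limit you have in mind.)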

Also, would you care to quantify the minimum environmental information
required to produce a model capable of manipulating a human? Or to
guess, if you cannot calculate.
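
(Incidentally, on the pi example: the "damn little" intrinsic
information is just the Kolmogorov-complexity point. A program a few
hundred bytes long emits the digits of pi to any precision you ask for,
so the first n digits can never carry more than that program length
plus roughly log(n) bits for specifying n. A sketch of such a generator
in Python, my illustration rather than yours, using Machin's formula
with fixed-point integer arithmetic:

    # Illustrative sketch only: a short generator for pi's digits,
    # bounding the Kolmogorov complexity of the digit string by a
    # program of roughly this size.
    # Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239),
    # evaluated in fixed-point integer arithmetic.

    def arctan_inv(x, scale):
        """arctan(1/x) * scale, via the alternating power series."""
        total = term = scale // x
        xsq, k, sign = x * x, 3, -1
        while term:
            term //= xsq
            total += sign * (term // k)
            k, sign = k + 2, -sign
        return total

    def pi_digits(n):
        """Return pi as the string '3.<n digits>'."""
        scale = 10 ** (n + 10)          # 10 guard digits for truncation error
        pi_fixed = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
        s = str(pi_fixed // 10 ** 10)   # drop the guard digits
        return s[0] + "." + s[1:n + 1]

    print(pi_digits(50))                # 3.14159265358979323846...

Any "secrets of human behavior" someone digs out of those digits are
supplied by the search procedure that locates them, not by pi.)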

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

