From: J. Andrew Rogers (andrew@ceruleansystems.com)
Date: Sun Mar 13 2005 - 11:55:41 MST
On Mar 8, 2005, at 1:47 PM, Daniel Radetsky wrote:
> "Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
>> But the last item will be available, and it and other structural cues
>> are sufficient information (given sufficient computing power) to
>> deduce that humans are fallible, quite possibly even that humans
>> evolved by natural selection.
>
> I don't see why you believe that there will be that much there to
> find, or that *any* AI would have to have the right kind of background
> knowledge to make that inference. Computing power is not a catch-all;
> you need facts too.
Yes, precisely: computing power buys very little direct intelligence.
The idea that an AI will be able to deduce all manner of behavioral
characteristics of humans in any kind of detail from a few trivial
samples and interactions is pretty anthropomorphic, and it essentially
ignores the vast amount of learned context and patterns that allow
humans to read the behaviors of other people. Humans aren't born with
this knowledge either, nor do we infer it in a vacuum. It takes humans
a long time to acquire enough useful samples and experience, and most
humans are drinking from a fire hose of real data every day.
No amount of navel-gazing will make an AI any smarter than it was a few
minutes prior, assuming any vaguely efficient design. Just because the
secret to all human behavior may exist in the digits of pi does not
imply that there is any more meaningful knowledge in pi than its
intrinsic information (which is damn little). Every argument I've ever
seen claiming significant utility from AI navel-gazing and RSI
(recursive self-improvement) has simply moved the hiding place of the
vast body of learned knowledge somewhere else. Environmental data is
not fungible, and you need gobs of it to have an internal model that is
even loosely valid; the intelligence cannot exceed the total
environmental information in the system or make meaningful predictions
outside its domain of applicability. The amount of data required to
even partially reverse-engineer most simple algorithmic systems vastly
exceeds the intrinsic complexity of the systems themselves.
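To make that last point concrete, here is a toy Python sketch (purely
illustrative; the generator, its parameters, and the brute-force
observer are all invented for the example). The observer has
effectively unlimited computing power in that it tracks every
hypothesis in its parameter grid at once, yet its knowledge of the
hidden parameters grows only as new observation bits arrive from the
environment:

    M = 256                   # modulus of the toy generator (known to the observer)
    SEED = 7                  # starting state (known to the observer)
    TRUE_A, TRUE_C = 37, 11   # hidden parameters; roughly a dozen bits to describe

    def step(a, c, x):
        """One update of the toy generator: x -> (a*x + c) mod M."""
        return (a * x + c) % M

    def observe(a, c, n):
        """The environment: emit the top bit of each of n successive states."""
        x, bits = SEED, []
        for _ in range(n):
            x = step(a, c, x)
            bits.append(x >> 7)
        return bits

    observed = observe(TRUE_A, TRUE_C, 200)

    # The observer: every (a, c) hypothesis in a 64x64 grid, each tracking the
    # state it predicts. A hypothesis is discarded only when an incoming bit
    # contradicts it -- i.e. only new environmental data shrinks the set.
    hypotheses = {(a, c): SEED for a in range(64) for c in range(64)}

    for n, bit in enumerate(observed, start=1):
        surviving = {}
        for (a, c), x in hypotheses.items():
            x = step(a, c, x)
            if x >> 7 == bit:
                surviving[(a, c)] = x
        hypotheses = surviving
        if len(hypotheses) == 1:
            print("parameters identified after", n, "one-bit observations")
            break
    else:
        print(len(hypotheses), "hypotheses still consistent after",
              len(observed), "observations")

The hidden parameters take only a dozen or so bits to write down, but
the observer still has to wait for the environment to hand it at least
that many bits of raw observation, and usually more, before the
hypothesis space collapses; re-running the filter over the same bits in
between changes nothing.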
This is the frequently glossed-over issue of model starvation. You
cannot solve the problem of model starvation by churning on the same
bits over and over, as no new information (i.e., potential knowledge)
is added to the system. Nor can the system automagically obtain more
information than is intrinsic to what it has been exposed to.
Intelligence is powerful precisely because it is a reflection of its
environment and nothing more (except perhaps whatever simple biases
exist in its machinery). There is a bootstrap problem here; an AI too
ignorant to be a threat is too ignorant to become a threat on its own
without a lot of help from its environment, and computers live in
sensory deprivation chambers.
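To see the churning point in the most trivial possible form (again a
toy of my own, not anything resembling a real design): take the dumbest
possible learner, an empirical character-frequency model, and let it
grind on its one sample as many times as you like:

    from collections import Counter

    def fit(text):
        """The dumbest possible learner: an empirical character-frequency model."""
        counts = Counter(text)
        total = sum(counts.values())
        return {ch: n / total for ch, n in counts.items()}

    sample = "the cat sat on the mat"          # the only environmental data

    model_once = fit(sample)
    model_churned = fit(sample * 1000)         # same bits, a thousand passes
    print(model_once == model_churned)         # True: churning added nothing

    model_fed = fit(sample + " the dog chased the cat")  # actual new data
    print(model_fed == model_once)             # False: only new data moves the model

A fancier learner can extract more of what is latent in its sample by
reprocessing it, but it cannot exceed the information the environment
actually handed it.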
Could one design a system and environment that allows the AI to quickly
become adept at understanding and manipulating human behavior? Sure!
But the point is that this is not a feature of intelligence but of the
internal model built via that intelligence, and it would require a vast
quantity of environmental data to build that model. A
sufficiently complex AI system with a rich environment may arrive at
that capability eventually on its own, but you'll have to wait a while.
It would be a trivial thing to build domain-specific super-intelligence
via selective model starvation on the part of its designers. Obviously,
this would significantly limit some of the theoretical utility of such
a system. But the real point is that model starvation is the default
state for intelligent systems generally, that extending knowledge in
any given direction is quite expensive, and that it is something the
designers of a system can easily control; whether they actually do so
is another issue.
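For what it's worth, the same kind of toy (purely illustrative again)
shows how crude the designers' control knob is: filter the data stream
to one domain and the learner is simply blind to everything else, no
matter how long it runs:

    from collections import Counter

    def fit(text):
        """Same toy learner: an empirical character-frequency model."""
        counts = Counter(text)
        total = sum(counts.values())
        return {ch: n / total for ch, n in counts.items()}

    environment = "2+2=4 the cat sat on the mat 3*7=21 9-5=4"
    allowed = set("0123456789+-*= ")           # the designers' filter

    starved = fit("".join(ch for ch in environment if ch in allowed))
    print("3" in starved, "c" in starved)      # True False: the filtered-out
                                               # domain never existed for it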
The idea of a laboratory AGI very rapidly bootstrapping into a
human-manipulating monster makes a *lot* of assumptions about its
environment that I would assert are not particularly realistic in most
cases. One could specifically create an environment where this is
likely to happen, but such an environment is unlikely to arise by
accident. It will be an eventual problem, but it probably won't be an
immediate problem.
(some other theoretical problems omitted)
j. andrew rogers