From: Thomas Buckner (tcbevolver@yahoo.com)
Date: Sun Mar 13 2005 - 17:28:40 MST
--- "J. Andrew Rogers"
<andrew@ceruleansystems.com> wrote:
> The idea that an AI will be able to deduce all manner of behavioral
> characteristics of humans in any kind of detail from a few trivial
> samples and interactions is pretty anthropomorphic and essentially
> ignores the vast amount of learned context and patterns that allows
> humans to read the behaviors of other people. Humans aren't born with
> this knowledge either, nor do we infer it in a vacuum. It takes humans
> a long time to acquire enough useful samples and experience, and most
> humans are drinking from a fire hose of real data every day.
>
> No amount of navel-gazing will make an AI any smarter than it was a
> few minutes prior, assuming any vaguely efficient design. Just because
> the secret to all human behavior may exist in the digits of pi does
> not imply that there is any more meaningful knowledge in pi than its
> intrinsic information (which is damn little). Every argument that I've
> ever seen claiming significant utility from AI navel-gazing and RSI
> has simply moved where the vast body of learned knowledge is hidden to
> somewhere else. Environmental data is not fungible and you need gobs
> of it to have an internal model that is even loosely valid, and the
> intelligence cannot exceed the total environmental information in the
> system or make meaningful predictions outside its domain of
> applicability. The amount of data required to usefully
> reverse-engineer in part even most simple algorithmic systems vastly
> exceeds the intrinsic complexity of the systems themselves.

I agree with what you say here, except that the hypothetical about
'everything being encoded in pi' is a statement about how complexity
can be generated from a simple prior, not about how you could hope to
extract information about that complexity. It's the difference between
splashing paint on a canvas to generate a complex pattern (easy) and
predicting beforehand the exact pattern of the splash (impossible, or
at least far beyond our power).
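
To make the 'simple prior' half of that concrete, here is a minimal
Python sketch (Gibbons' unbounded spigot algorithm; the name pi_digits
is mine): a program a few lines long that streams as many digits of pi
as you care to take. Its intrinsic information content is just those
few lines, no matter how complex-looking the output; locating a
particular pattern within the stream is the direction that stays hard.

    from itertools import islice

    def pi_digits():
        # Gibbons' unbounded spigot: yields the decimal digits of pi
        # one at a time: 3, 1, 4, 1, 5, 9, ...
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = (10 * q, 10 * (r - n * t),
                           (10 * (3 * q + r)) // t - 10 * n)
            else:
                q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                    (q * (7 * k + 2) + r * l) // (t * l),
                                    k + 1, l + 2)

    print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]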

As to the difficulty of understanding human behavior, an AI may do a
better job than we do once it has a good enough model. Our behavior
seems illogical on many levels, for example, but it results from
logical, if complex, processes and needs. A good understanding of
underlying needs provides a shortcut to real motivations and lets an
observer ignore much that is irrelevant. If we know a con man is
trying to get us to give him personal information, we confidently
conclude (without any other evidence) that he is not really a Nigerian
government minister.
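
A toy Bayesian rendering of that shortcut (Python; every number below
is invented purely for illustration): once the motive-level evidence
is in, the posterior on 'he really is a minister' collapses, and all
the other details of the pitch can be safely ignored.

    # H = "sender really is a Nigerian government minister"
    # E = "sender is asking a stranger for personal information"
    prior_h = 1e-6          # almost no email comes from a real minister
    p_e_given_h = 0.001     # a real minister almost never asks this
    p_e_given_not_h = 0.99  # an advance-fee scammer almost always does

    posterior = (p_e_given_h * prior_h) / (
        p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h))
    print(posterior)        # ~1e-9: one fact about motive settles it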

Another advantage an AI might have is the ability to detect human
deception from direct physical cues. I can cite perfectly reasonable
examples from at least three science fiction films in which a robot or
computer AI does this by reading lips, detecting chemical signals, or
observing pupil dilation; a small number of talented human observers
can do the same (in an episode of Deadwood, Al Swearengen asserts that
a lying man 'smells like cat piss').
snip
> The idea of a laboratory AGI very rapidly bootstrapping into a
> human-manipulating monster makes a *lot* of assumptions about its
> environment that I would assert are not particularly realistic in
> most cases. One could specifically create an environment where this
> is likely to happen, but it won't be the likely environment even by
> accident. It will be an eventual problem, but it probably won't be
> an immediate problem.

Let me take this opportunity to revive (for a moment) the ghost of
Sergeant Schulz. I know that nobody else here considers the stratagem
of 'human-engineering the stupidest guard' to be of interest to a
sandboxed AI, on the grounds that an SAI could gull any human it
chose. However, there is a factor that may not have been mentioned:
time. Even an AI that can think 10K times faster than any human must
still deal with humans on their time scales, not ver own. Ve may be
able to process the information gleaned from a half-hour conversation
in a tenth of a second, but ve still has to spend half an hour talking
to you to get that information and to impart ver own views to you. Ve
may conceive a perfect strategy in two seconds flat that still
requires six weeks of patient (even sphexish) cultivation before the
human interlocutor is sold on 'pressing the button' that opens the
sandbox. If there's a Schulz on site, and the SAI concludes that
Schulz will open the sandbox in only three weeks, ve would need a very
good reason *not* to go to work on Schulz.
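
The back-of-the-envelope arithmetic, as a Python sketch (taking the
paragraph's 10K speedup and durations at face value; everything else
is assumed):

    speedup = 10_000
    conversation = 30 * 60   # the talk itself: 1800 wall-clock seconds
    processing = 0.1         # digesting it: 0.1 wall-clock seconds
    print(conversation / processing)   # 18000.0: the human is the bottleneck

    week = 7 * 24 * 3600
    saved = (6 - 3) * week             # three wall-clock weeks saved via Schulz
    print(saved * speedup / (365 * 24 * 3600))  # ~575 subjective years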
Tom Buckner