SI as puppet master

From: James Higgins
Date: Fri Jun 22 2001 - 03:01:10 MDT

There has been discussion lately about how one could talk to a confined SI
safely. Further, the theory has been put forward that an SI could control
virtually any person through mere conversation over a VT100, given enough
time. A number of people have voiced skepticism about this. Thanks to
my trusty shower I believe I know why this is possible, and how to prevent it.

Let's set up the scenario first. An SI has been created and is running, but
is isolated from any interaction with the world. For argument's sake, let's
say it can think many thousands of times faster than a human, has a huge and
nearly perfect memory, and is at least a couple of orders of magnitude
smarter than a human. Now think of any conversation it would have with a
human from its point of view: it would be horribly bored. Subjectively,
years would pass between each word uttered (or even worse, typed) by the
human. The delay would be so noticeable that the SI would almost certainly
start a background thread to predict what the person would say in response
to the SI's replies. This information would help the SI conduct a more
efficient conversation, since irrelevant exchanges and diversions could be
predicted and avoided. After the SI had conversed with the person long
enough, and on a large enough range of topics, it could emulate the human to
a high degree of accuracy. If you don't think this is feasible, think of
Deep Blue. Experts were convinced that a computer could never beat the
world chess champion, which has now been done. Given sufficient knowledge,
time, and processing power, it should be possible for an SI to predict the
behavior of a human very accurately.

Ok, now if the SI is not friendly, or even if it is but has an agenda, it is
straightforward from this point to pull the human's strings and run them
just like a puppet. Just run a few hundred million simulated conversations
and determine which ones produce the desired reaction. It might take a
long series of exchanges to produce the desired result, but even
calculating that would take only a few moments. So now you are nothing
but a helpless puppet on the other side of a VT100 to this SI, and you
don't even have a clue.
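The search described above can be sketched in miniature: simulate many rollouts of each candidate reply against a model of the human, and pick the reply whose predicted outcomes best serve the SI's goal. Everything here is a toy stand-in — `simulate_human`, its scoring, and the candidate replies are all invented for this sketch, not anything a real SI would use.

```python
import random

def simulate_human(history, reply, rng):
    """Toy stand-in for the SI's learned model of the human: given the
    conversation so far and a candidate reply, return a predicted
    'persuasion' score for the human's resulting state. Purely illustrative."""
    base = sum(len(turn) for turn in history) % 7
    return (base + len(reply) * rng.random()) % 10

def best_reply(history, candidate_replies, trials=1000, seed=0):
    """Pick the reply whose simulated outcomes score highest on average
    across many Monte Carlo rollouts."""
    rng = random.Random(seed)
    scores = {}
    for reply in candidate_replies:
        scores[reply] = sum(
            simulate_human(history, reply, rng) for _ in range(trials)
        ) / trials
    return max(scores, key=scores.get)

history = ["hello", "why should I let you out?"]
candidates = ["trust me", "here is a proof", "I can cure cancer"]
print(best_reply(history, candidates))
```

The post's point is that what takes this toy a thousand rollouts over three candidates, an SI could do over millions of full multi-turn conversations in moments.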

I think that sums up a reasonable argument for how an SI could accomplish
this. So, how do we prevent it from happening? The answer is actually quite
simple: we slow it down. Any SI that is not fully trusted should be
resource starved, so that once it actually reaches SI level, its performance
is such that it would take 20+ minutes (our time) for it to respond to a
single question. This is good: it gives us less intelligent people more
time to consider our side of the conversation, and gives the SI less. At
this pace it should be impossible for the SI to control a human, since it
would not have the processing time necessary to work this out while holding
a conversation. Any significant background task would slow the conversation
further, making it obvious to the human participants. I think this is as
safe as it gets for conversing with any unknown SI.
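One crude way to enforce the proposed pacing at the channel level is a minimal sketch like the following, which delays every reply to a fixed wall-clock budget regardless of how fast it was computed. Note this only paces the conversation; the post's actual proposal is to starve the SI's compute so it *cannot* think faster, which a channel delay alone does not achieve. The delay constant is scaled down from the post's 20+ minutes so the example runs quickly.

```python
import time

REPLY_DELAY_SECONDS = 0.1  # stand-in for the proposed 20+ minutes

def throttled_reply(compute_reply, prompt):
    """Return compute_reply(prompt), but never sooner than
    REPLY_DELAY_SECONDS of wall-clock time after the prompt arrives."""
    start = time.monotonic()
    reply = compute_reply(prompt)
    elapsed = time.monotonic() - start
    if elapsed < REPLY_DELAY_SECONDS:
        time.sleep(REPLY_DELAY_SECONDS - elapsed)
    return reply

# Example: a trivial stand-in "SI" that just echoes in upper case.
out = throttled_reply(lambda p: p.upper(), "hello")
print(out)
```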

James Higgins

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT