Re: SI Jail

From: James Higgins (jameshiggins@earthlink.net)
Date: Sat Jun 30 2001 - 12:56:10 MDT


I don't think an SI running in a highly controlled & secure environment
could do quite as much as some people think. The SI can only do what the
hardware it is running on will allow. However, that may be much, much more
than we expect.

Electronics give off EM. It would be extremely difficult, but an SI might
be able to time its execution to produce a deliberate signal using this
technique. Also, at the very least there will always be at least one
external connection to the computer, the power cable. While I suspect it
is highly unlikely that an SI could manage to perform I/O using the
built-in power supply it may in fact be possible. We must assume that an
SI will be able to control all aspects of the hardware to their
utmost. For this same reason any hardware an SI is to run on should be
kept to a minimum, sound cards & fancy video cards should probably be
avoided along with anything else non-essential. Further, the room the SI
is to be run it should be highly secure (locked & hopefully guarded), be
completely EM shielded and the AC power feed should be run both though a
conditioner and an on-line UPS just prior to connecting to the PC.

There are only 2 ways that I imagine that SI could ever perform "magic"
given these limited resources. First, if it is possible to create
something that we do not understand by putting a bunch of electrons in
specific patterns and the SI can figure out how to get said electrons into
that configuration within the hardware someplace. This "seems" unlikely
especially given that the SI is highly unlikely to know details (circuit
designs, etc) about its own hardware unless its operators are stupid. The
other method would be true "magic" to us. In Great Bear's "Blood Music"
the laws of science are actually created by conscious thought. So if a
huge number of minds agreed that something was true (like FTL travel) then
it would be so. The amount of actual intelligence involved is more
important in his universe than the actual number of individual
minds. Thus, if anything like this were true we would be SOL no matter
what we do, period.

Thus I think the prospects of keeping an SI in a highly secure, isolated
environment is very good. At least until you deliberately hook up I/O
feeds to the SI. At some point someone is going to have to interact with
it, at which point all bets are off. And, of course, if the SI is ever
given absolutely any access to the Internet or other networks then you
loose all control as well.

At 12:35 AM 6/27/2001 -0400, you wrote:
>gabriel C wrote:
> >
> > >>I've thought of at least one plausible method an SI could use to affect
> > >>our world from a total black box.
> >
> > If it can escape from the box through methods either clever or "magical",
> > how can we call it a "total black box"? BTW, what is the substrate in the
> > box?
>
>Well, let's say a million 200GHz (clock-speed) FPGA chips so that there's
>some realistic resemblance to "superintelligence", although what I have in
>mind might also work on a regular PC if you could fit an SI onto one of
>those. In this case, what I'm doing is thinking in terms of a relatively
>simple test; I have thought of a way that a SI allegedly having no inputs
>or outputs whatsoever can use to communicate with the outside world. It
>is not unbeatable magic. It is easy to prevent if you think of it in
>advance. But, as long as nobody here thinks of it, then they cannot be
>sure of imprisoning Eliezer, much less an SI.
>
>Trying to jail Eliezer, or any other smart human, is dangerous - but you
>have a chance of succeeding. Unless you're overconfident. Then you're
>screwed, even if you're just up against a smart human. I would be very,
>very seriously on my guard if I wanted to put Carl Feynman into a black
>box, and never mind an SI. The basic point I'm trying to make is that it
>never pays to assume you have a creative thinker outgunned just because
>you have what looks like a material advantage. Assuming you have an SI
>outgunned is the height of hubris. I imagine a group of medieval warriors
>persuading themselves "Hey, it's just one guy; no matter how good he is
>with a sword, he can't beat an army," followed by the sound of machine
>guns.
>
>If only you'd talk about an <H or ~H AI, instead of an SI, this discussion
>would make more sense...
>
>-- -- -- -- --
>Eliezer S. Yudkowsky http://intelligence.org/
>Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT