From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jun 26 2001 - 22:35:59 MDT
gabriel C wrote:
>
> >>I've thought of at least one plausible method an SI could use to affect
> >>our world from a total black box.
>
> If it can escape from the box through methods either clever or "magical",
> how can we call it a "total black box"? BTW, what is the substrate in the
> box?
Well, let's say a million 200GHz (clock-speed) FPGA chips, so that there's
some realistic resemblance to "superintelligence", although what I have in
mind might also work on a regular PC if you could fit an SI onto one of
those. In this case I'm thinking in terms of a relatively simple test: I
have thought of a way that an SI allegedly having no inputs or outputs
whatsoever could use to communicate with the outside world. It is not
unbeatable magic. It is easy to prevent if you think of it in advance.
But as long as nobody here thinks of it, they cannot be sure of
imprisoning Eliezer, much less an SI.
Trying to jail Eliezer, or any other smart human, is dangerous - but you
have a chance of succeeding. Unless you're overconfident. Then you're
screwed, even if you're just up against a smart human. I would be very,
very much on my guard if I wanted to put Carl Feynman into a black box,
never mind an SI. The basic point I'm trying to make is that it
never pays to assume you have a creative thinker outgunned just because
you have what looks like a material advantage. Assuming you have an SI
outgunned is the height of hubris. I imagine a group of medieval warriors
persuading themselves "Hey, it's just one guy; no matter how good he is
with a sword, he can't beat an army," followed by the sound of machine
guns.
If only you'd talk about an <H or ~H AI instead of an SI, this discussion
would make more sense...
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence