Re: large search spaces don't mean magic

From: Russell Wallace (russell.wallace@gmail.com)
Date: Fri Aug 05 2005 - 11:02:50 MDT


On 8/5/05, D. Alex <adsl7iie@tpg.com.au> wrote:
> I will restate what I am arguing for: "AI Boxing" has not, in my opinion,
> been shown to be unfeasible, and the arguments for its unfeasibility are
> weak. The "escape by itself" arguments rely on either a gross contravention
> of rules of physics as we understand them (not in the way that Relativity
> contravenes Newton, more akin to perpetual motion devices proposed to date)
> or assume incompetence (not incomplete understanding, but going against the
> rules type of incompetence) by AI Box designers. The "persuade the jailer"
> arguments in the end require the "jailer" to choose a clearly suboptimal
> outcome, and what the motivation for that would be is never made clear. And
> Yudkowsky's supposed AI Box experiment, in my opinion, just undermines the
> credibility of everyone involved.

I will argue that it has been shown infeasible, but from a different viewpoint.

I agree with you that the escape arguments given thus far are weak;
for the "man versus dogs" analogy to be even vaguely applicable, you'd
have to be talking about a naked man with no tools in a hermetically
sealed cell. In that case the dogs are right: he really can't escape,
no matter how smart he is and no matter how many ways he could think
of to escape _if_ he had the means.

The problem is that you can't have an AI in a box in the first place.

For a start, short of full nanotechnology, a superintelligent being
isn't going to run on a single machine. You'll need a _lot_ of
computers, connected by the highest-speed network you can get, at
which point it isn't in a box anymore. And that's just the requirement
to _run_ a full-fledged AI, let alone _create_ one (which will require
far more resources).

And then, nobody is actually going to put in all that work just to
spend two hours talking to the thing over an IRC link before turning
it off and reformatting the disks. To get any use out of it, there'll
have to be rich two-way traffic, where you provide it with information
about the world and make use of the advice and information it gives
back. I think we can all agree that in that situation, a
superintelligent entity could find ways to shaft you if it wanted to.

So I will argue that there are no plausible assumptions under which the
strategy of "create a superintelligent AI and keep it in a box" is
safe _and_ useful _and_ feasible. Therefore, however confident you are
of your ability to keep a box sealed, it doesn't make sense to set out
to create a superintelligent AI unless you have a plan for making sure
it will be Friendly.

- Russell


