Security overkill

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat May 17 2003 - 21:45:07 MDT


Gary Miller wrote:
> My proposed solution to the friendliness problem.
>
> Note some of you will laugh this off as overkill. But believe me,
> having worked as a consultant for the government for a number of years,
> this is just business as usual for the NSA. It is a very expensive but
> very secure development process. It is based upon separation and
> balance of power. No one person has the access and knowledge to
> compromise the system. Relationships between team members must be
> prohibited to prevent the possibility of collusion.

Overkill? No, I don't think it's overkill. I don't think there's any
such thing as overkill when it comes to certain problems. And even if it
were, what's wrong with overkill?

Note one thing, however: Safety is very, very expensive.

For example, I would very much like to have guaranteed frame-by-frame
reproducibility of AI development. You develop the AI for a week,
recording and timestamping all outside inputs. Then, when the week is
over, you take a snapshot. Then, on a separate computer, you take last
week's snapshot and run it forward, using the timestamped input. If the
final result doesn't match this week's snapshot, start over again from
last week. This guarantees that each and every frame of the AI's
existence is available to inspection, even five years later, or for that
matter after the Singularity.
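
In code, the replay check might look something like the following minimal
Python sketch. Every name here (state, step, input_log) is hypothetical, and
it assumes the AI's state update is a deterministic function of its current
state plus the recorded outside inputs:

    # Sketch of the replay-and-compare scheme; all names are hypothetical.
    import hashlib
    import pickle

    def snapshot_hash(state):
        # Hash a serialized snapshot so two runs can be compared cheaply.
        return hashlib.sha256(pickle.dumps(state)).hexdigest()

    def run_period(state, step, input_log):
        # Advance the AI using only the recorded, timestamped outside inputs.
        # step() must be a deterministic function of (state, timestamp, event).
        for timestamp, event in input_log:
            state = step(state, timestamp, event)
        return state

    def verify_period(last_snapshot, step, input_log, claimed_hash):
        # On a separate machine: replay last week's snapshot forward and
        # check that it reproduces this week's snapshot bit for bit.
        replayed = run_period(last_snapshot, step, input_log)
        return snapshot_hash(replayed) == claimed_hash

The catch is that step() has to be genuinely deterministic: random seeds,
thread scheduling, floating-point behavior, everything.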

But that would require specific software support, which would be expensive.
At present it seems like no one much cares about the
Singularity, so expensive safety options are pretty much out of the
question. Sad. Pathetic, in fact. But I can't control humanity's
choices, only my own.

Aside from that, I see at least one major problem with the set of
precautions you proposed. You suggested separation of the system
architects from the development environment, which strikes me as both
infeasible and suboptimally safe. Remember that Friendly AI is much
harder as a theoretical problem than as a trust problem. The theoretical
problem is harder because you can't solve it by throwing up security
walls. Security measures are one thing, but anything that actually
reduces the ability of the system architects to solve the problem of
*building* Friendly AI... no. You don't have that kind of safety margin;
or not knowably so, at any rate.

That's the problem with outsiders making up security precautions for the
project to take; at least one of them will, accidentally, end up ruling
out successful Friendliness development.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
