RE: Think of it as AGI suiciding, not boxing

From: Phillip Huggan
Date: Tue Feb 21 2006 - 12:16:00 MST

It is feasible to police as computation increases towards ubiquity, provided policing implements also increase at the same rate or higher. It isn't happening now, but there is a class of technologies on the horizon that encompasses the mass production of sensor devices. The technical side of things seems feasible (one day, maybe not before AGI is built or something else earth-shattering happens); it is the political and administrative risks that scare me. That is why I'm only advancing a *let's never allow an AGI to get out* vision of the future as a suggestion to chew on.
  I didn't mean to suggest limiting the output of an AGI as the only safeguard. I meant that it might be safer to tack it on top of whatever programmed friendly architectures already exist than to allow the AGI to embark upon a full-throttle engineering project. It seems far from certain to me that we can't screen out all of an AGI's magic. It still has to obey the laws of physics. As a thought experiment, imagine an AGI entombed within a collapsing hollow sphere of mini black holes. Such an AGI is toast no matter how smart it is if massive engineering precursors don't accompany it. Believing in the certainty of AGI magic means you believe that, for all classes of AGI architectures, the AGI's own substrate material offers sufficient resources for it to escape.
  We can't even define this discussion until we identify what AGI magic is. I can't suggest a safer output medium than engineering blueprints if I don't know what the risks are. What are they? Inducing a spiritual experience or hypnotism via sensory input to humans? Mutating nearby viruses with EM radiation to make mini robots? Manipulating nearby power grids? I know our physics is presently incomplete. Give me a hint: will the AGI attempt to utilize gravity magic, or everything else combined?

Christopher Healey wrote:
> To significantly reduce most extinction threats, you need
> to monitor all bio/chemical lab facilities and all
> computers worldwide. A means of military/police
> intervention must be devised to deal with violators too.
> Obviously there are risks of initiating WWIII and of
> introducing tyrants to power if the extinction threat
> reduction process is goofed. Obviously an AGI may kill us
> off.

Is this really feasible to police as computation continues toward ubiquity? If not, then the rate-limited SAI, and humanity, will eventually operate in parallel with somebody else's SAI that is not so constrained. Both would exhibit exponential growth in capability, but the closed-loop SAI would have a much shorter cycle time, without an imposed team of "proof technicians" sitting there hitting a slow yes button. We'd also be discarding all possible "win" solutions that fall outside our limited proofing abilities. We would then face the same risks all over again, but having crippled our ability to use the first SAI to guard against these risk classes going forward.
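The cycle-time point above can be made concrete with a toy model. A minimal sketch, assuming (my numbers, purely illustrative) that each system doubles in capability once per improvement cycle, and that human proof-checking stretches the constrained SAI's cycle from a month to half a year:

```python
# Toy model of the cycle-time argument: capability doubles once per
# self-improvement cycle; proof-gating only changes how often a cycle
# completes. All specific numbers here are illustrative assumptions.

def capability(cycles_per_year: float, years: float, base: float = 1.0) -> float:
    """Capability after `years`, doubling once per completed cycle."""
    return base * 2.0 ** (cycles_per_year * years)

# Unconstrained SAI: one cycle per month (12/year).
unconstrained = capability(cycles_per_year=12, years=5)

# Proof-gated SAI: humans approve each step, so only 2 cycles/year.
constrained = capability(cycles_per_year=2, years=5)

# The ratio itself grows exponentially with time, not linearly:
print(unconstrained / constrained)  # 2**(60 - 10) = 2**50
```

The same growth law governs both systems; only the cycle rate differs, yet after five years the gap is a factor of 2^50. That is the sense in which the slower loop is not merely behind but falling behind.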

> Because it uses mechanical rods and not electricity,
> the possibility of available AGI magic is reduced.

Perhaps electronically based AGI magic, but we really don't know. This strategy is futile, since we're now trying to plan for unknown effects. It's just as unknowably likely that rod-logic will provide a better substrate on which to execute an exploit against our efforts.

The more of these scenarios I've seen posted to this list over the months, the more convinced I become that Friendliness must be an integral component of an AGI design at the most basic level, and at multiple levels. Friendliness-in-depth, as it were. Any AI-Box or firewall-type solution tasked with letting only Friendliness through, even if it were in principle possible, would be a single point of failure on which our entire existence would ultimately rest. Not a responsible design in my book.


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:55 MDT