RE: Military Friendly AI

From: Eugen Leitl (eugen@leitl.org)
Date: Sun Jun 30 2002 - 07:01:24 MDT


On Sat, 29 Jun 2002, Ben Goertzel wrote:

> However, by intelligently restricting the grammar of outgoing
> requests, one can go very far in this direction.

I do not see how it helps. You can block known exploit patterns in real
time, but you cannot meaningfully restrict the requests which happen to
remotely trigger a hitherto unknown vulnerability (though you can probably
detect a blatant brute-force search for buffer overruns -- stealth scans,
especially distributed ones, fall completely under your radar). As soon as
a single vulnerability is found, the entire class of systems (even static
diversity is pretty much nonexistent in the current landscape and software
model) is only a few steps away from falling under the attacker's control.
With the current state of the art, even a moderately smart but distributed
attacker can take over >90% of all online nodes without trying hard.
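
To make the limitation concrete, here is a toy sketch of the kind of filter
you describe (the grammar, patterns, and names are mine, purely
illustrative): an allowlist grammar for outgoing requests plus a blocklist
of known exploit signatures. A request that passes both checks can still be
exactly the well-formed input that trips an unknown parser bug on the
remote side.

import re

# Allowlist grammar for outgoing requests (illustrative assumption only).
ALLOWED_REQUEST = re.compile(
    r"^GET /[A-Za-z0-9_./?=&%-]{0,200} HTTP/1\.[01]\r\n$"
)

# Blocklist of known exploit signatures (illustrative assumption only).
KNOWN_BAD = [
    re.compile(r"%n"),       # format-string probe
    re.compile(r"A{64,}"),   # crude buffer-overrun probe
]

def permit(request: str) -> bool:
    """Pass a request only if it fits the grammar and matches no known-bad pattern."""
    if not ALLOWED_REQUEST.match(request):
        return False
    return not any(p.search(request) for p in KNOWN_BAD)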
 
> The Net is a tremendous information resource for a growing AGI
> baby.... Not using it at all, is simply not an option.

I notice you dismissed most of the basic security measures I mentioned as
far too premature. While I agree that they are currently ridiculous,
clearly there is a threshold where they need to be engaged. Do you have a
guard in place that triggers on a threshold over some sum of behaviour
observables, or only what your intuition tells you?
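
For the sake of concreteness, such a guard could be as simple as a weighted
sum of observables compared against a tripwire (the observable names,
weights, and threshold below are illustrative assumptions, nothing more):

# Hypothetical behaviour-observable guard: trips when a weighted sum
# of observables crosses a fixed threshold.
WEIGHTS = {
    "outbound_requests_per_min": 0.2,
    "novel_hosts_contacted":     1.0,
    "self_modification_events":  5.0,
    "goal_system_revisions":     3.0,
}
THRESHOLD = 10.0

def guard_tripped(observables: dict) -> bool:
    """Return True if the weighted observable score crosses the threshold."""
    score = sum(WEIGHTS.get(name, 0.0) * value
                for name, value in observables.items())
    return score >= THRESHOLD

I am not claiming these are the right observables; the point is only that
the trigger should be explicit and mechanical rather than intuitive.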
 
> One option we've considered is to create a huge mirror of a large
> portion of the Net, for the system's use. However, this would cost
> mucho dinero!

All current search engines keep a large fraction of the web in their
cache. I think it should be easy to arrange for an air-gapped AI to read a
large fraction of that. Google has been known to try strange things in
R&D. Clearly there's a tremendous market for, e.g., a natural language
interface that finds facts in iterative user sessions.

Have you tried talking to them? This, of course, also/especially applies
to Cyc.
 
> If a diverse committee of transhumanist-minded individuals agreed that going
> ahead with a Novamente-launched singularity was a very bad idea, then, I
> would not do it.

Fair enough. Notice that transhumanists are self-selected. If you were to
consult a committee of Singularitarians who believe that the Singularity is
inherently good, whether or not we people make it, then the answer would be
entirely predictable. It is very easy to engineer the outcome towards what
you want to hear by jiggling the committee's composition and the criteria
for what constitutes an acceptable or unacceptable member.
 
> I honestly do not believe we're *ever* going to be able to reduce "the
> probability that an AI goes rogue at some point in the far future" to
> less than .01%, with any meaningful degree of confidence. This kind
> of certainty is not going to be available pre-Singularity.

I agree. In fact, the error bound you cite is quite absurd. Because the far
future is not so far removed for observers running at superrealtime rates
(a factor of ~10^6), even small per-period probabilities do cumulate. And
even that is a simplistic picture: long-term evolution is intrinsically
unknowable, so cumulating probabilities is itself an extremely naive model.
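
A back-of-the-envelope illustration of the cumulation (it treats your .01%
figure as a per-subjective-year risk and assumes independent per-period
risks, which is precisely the naive model I just called out):

# 0.01% chance of going rogue per subjective year, at ~10^6 subjective
# years elapsing per real year of wall-clock time.
p_per_year = 1e-4
speedup = 1e6

p_rogue_in_one_real_year = 1 - (1 - p_per_year) ** speedup
print(p_rogue_in_one_real_year)   # ~1.0: effectively certain under this model

So even if you could certify 0.01% per subjective year, at superrealtime
rates that certainty evaporates within a single real year.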


