Re[2]: Suggested AI-Box protocol & AI-Honeypots

From: Cliff Stabbert (cps46@earthlink.net)
Date: Sun Jul 07 2002 - 20:10:21 MDT


Saturday, July 7, 2001, 4:38:59 PM, Michael Warnock wrote:

MW> My own intuition is that the likelihood of FAI over UAI is
MW> high enough that it is more important to bring it quickly
MW> than safely (if that is possible), because of the various
MW> other existential (or big-step-back) events that could occur
MW> in the meantime. Nonetheless I feel it's important to
MW> discuss strategies for containing code which, regardless
MW> of its sentience or friendliness, is capable of evolving
MW> and reproducing unlike anything currently infecting the
MW> internet.

One possible approach to making the AI more amenable to study,
analysis and containment would be to simply (presuming non-modifiable
hardware and a very limited communication channel) bring the
clockspeed down.

Without external referents, the AI can have no concept or perception
of time other than number of clock cycles. We can run it at 10GHz or
1Hz and it will have no idea unless we clue it in. Fixing our vt100
transmission speed relative to its clockspeed, e.g. 1 character for
every 1000 of its clocks, would eliminate at least that clue.
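A toy sketch of that coupling (the class and numbers are my own
illustration, not part of any real AI-box setup): deliver characters on
a schedule fixed in *simulated* cycles, so channel timing carries no
information about wall-clock speed.

```python
CLOCKS_PER_CHAR = 1000  # the 1-character-per-1000-clocks ratio from above

class ThrottledConsole:
    """Delivers I/O at a rate fixed in simulated clock cycles, so the AI
    cannot infer the real clock rate from the channel's timing."""

    def __init__(self, clocks_per_char=CLOCKS_PER_CHAR):
        self.clocks_per_char = clocks_per_char
        self.cycle = 0       # simulated clock counter
        self.outbox = []     # characters queued for delivery
        self.delivered = []  # characters the AI has "seen"

    def tick(self, n=1):
        # Advance the simulated clock; release one queued character
        # every clocks_per_char cycles, regardless of real elapsed time.
        for _ in range(n):
            self.cycle += 1
            if self.outbox and self.cycle % self.clocks_per_char == 0:
                self.delivered.append(self.outbox.pop(0))

    def send(self, text):
        self.outbox.extend(text)

console = ThrottledConsole()
console.send("hi")
console.tick(1000)  # after 1000 simulated cycles, 'h' arrives
console.tick(1000)  # after 2000, 'i' arrives
# console.delivered == ['h', 'i']
```

Whether we run tick() at 10GHz or once a second, the AI sees the same
character-per-cycle schedule.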

Leaving aside all questions of urgency due to other factors, and all
other things being equal -- shouldn't we have at least *somewhat* better
odds of figuring out what an AI at 0.01% of full speed is doing than one
running at full speed?

For instance, we *might*, based on how we seeded the AI and how ve grew,
have reason to think that this chunk of nodes correlates more strongly
to social/psychological analysis/cognition, and that chunk of nodes
more strongly to spatial/mathematical cognition, etc.

Bringing the clockspeed down -- way down -- during an AI box
experiment would then let us study such activation patterns at
our leisure, and speculate as to what was going on, without the AI
being able to deduce from an increased latency of our replies that we
were doing so.

The *content* of our communications could still clue it in, but
1) we would have eliminated at least one information channel
2) we would have more time to try to minimize the suspiciousness
   of our responses, to the extreme of e.g. presenting a team member
   who hadn't participated in the analysis with the dialogue so far
   and asking for their immediate response -- since we may be more
   interested in the AI's responses or brain patterns than in the
   conversation itself
3) we may be able to spot it performing more than obviously necessary
   textual analysis on our responses, which could indicate suspicion
   on its part.
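Point 3 could be approximated by watching how many simulated cycles the
AI spends on each of our replies, relative to a running baseline -- a
purely illustrative sketch (the threshold and cost figures are invented
for the example):

```python
import statistics

def flag_suspicious(cycle_costs, factor=3.0, warmup=5):
    """Return indices of replies on which the AI spent more than
    `factor` times the median cycle cost of all earlier replies --
    a crude proxy for 'more than obviously necessary' analysis."""
    flagged = []
    for i, cost in enumerate(cycle_costs):
        if i >= warmup:
            baseline = statistics.median(cycle_costs[:i])
            if cost > factor * baseline:
                flagged.append(i)
    return flagged

# Hypothetical per-reply processing costs, in simulated cycles;
# the last reply triggers unusually heavy re-analysis.
costs = [100, 110, 95, 105, 100, 102, 480]
# flag_suspicious(costs) == [6]
```

A real monitor would need a far better cost measure than raw cycles,
but the principle -- compare effort against the AI's own baseline --
is the same.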

Improbable but not, I think, impossible: we may even be able
to bring the clockspeed down exponentially just as it starts
to accelerate exponentially past human-level intelligence. We could
give it a shot, in any case.
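The control rule itself is simple, even if the hard part -- a usable
capability estimate -- is entirely assumed here. A minimal sketch,
with every name and number hypothetical:

```python
import math

def throttled_rate(base_hz, capability, reference=1.0, backoff=2.0):
    """Scale the simulated clock rate down geometrically as a
    (hypothetical) capability estimate climbs past a reference level:
    with backoff=2.0, each doubling of capability halves the clock."""
    if capability <= reference:
        return base_hz
    doublings = math.log(capability / reference, 2)
    return base_hz / (backoff ** doublings)

# At the reference capability, run at full speed:
#   throttled_rate(1e9, 1.0) -> 1e9 Hz
# Capability doubles -> clock halves:
#   throttled_rate(1e9, 2.0) -> 5e8 Hz
```

If the estimate tracks the AI's actual speedup, the product of
clockspeed and capability -- roughly, its wall-clock effectiveness --
stays constant; if the estimate lags, so does the throttle, which is
why this is only worth a shot, not a guarantee.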

Just to clarify: I am in no way arguing that speeding us up relative
to a transhuman AI could make us "just as intelligent", or that if I
could just run my brain at 10x the clock speed of yours, that would
make me 10x as smart -- but I *am* arguing for *some* relative
improvement, which we can try to maximize by getting the timing right.

--
Cliff


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:40 MDT