Re: Effective(?) AI Jail

From: Dani Eder (danielravennest@yahoo.com)
Date: Tue Jun 19 2001 - 12:34:56 MDT


> It simply imposes the already-obvious
> condition that the first phases of takeover
> should not cause alarm or alertness in the
> subject.

Perhaps a review of the science fiction literature
is in order, to catalog 'takeover' methods. That
would be a useful prelude to thinking about
countermeasures.

Really bad example: Colossus (filmed as The
Forbin Project), by D.F. Jones.
Derived countermeasure: Never give a seed AI control
of nuclear weapons :-(

Better countermeasure: Isolate the seed AI at the
level we use for control of ICBM launches. Guy #1,
who interacts with the seed AI, is sealed off in a
room with it. The experiment controller sits outside
in another room, and his sole job is to keep a
finger on a deadman switch. The deadman switch is
hardwired to a third room, where guy #3 controls
the power and a flamethrower aimed at the CPU,
plus the door lock releases. Guy #3 has no
informational input except the light connected to
the deadman switch. If the light goes out, he frags
the machine; otherwise he powers it down at a set
time. The controller monitors guy #1 and the seed
AI, so if the mere existence of the controller is
revealed, or any 'suspicious' conversation occurs,
off goes the deadman switch. The controller can't
send any messages into the room with the seed AI,
and can't leave his chair without releasing the
deadman switch.
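Guy #3's job is simple enough to sketch as a loop: his only input is the deadman light, and his only actions are fragging the machine or an orderly power-down at the set time. Here's a minimal sketch in Python; all function and parameter names (watchman_loop, light_is_on, frag, power_down) are my own invention for illustration, not part of any real rig:

```python
import time

def watchman_loop(light_is_on, shutdown_time, frag, power_down,
                  now=time.monotonic):
    """Guy #3's decision loop.

    light_is_on:   callable -> bool, the hardwired deadman light
    shutdown_time: absolute monotonic time of the scheduled power-down
    frag:          callable, destroys the machine
    power_down:    callable, orderly shutdown at the set time
    """
    while now() < shutdown_time:
        if not light_is_on():
            # Light out => controller released the switch.
            frag()
            return "fragged"
        time.sleep(0.01)  # poll interval; a real setup is hardwired
    # Reached the set time with the light still on.
    power_down()
    return "powered down"
```

The point of the structure is that nothing the seed AI says can reach guy #3: the loop takes exactly one bit of input and never reads anything else.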

If a seed AI can deduce the existence of a 1-bit
channel without any source of information, and
thereby prevent its own slagging, it has already
reached 'godhood' by my definition.

Dani Eder




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT