From: H C (lphege@hotmail.com)
Date: Sun Jun 18 2006 - 14:21:09 MDT
The concept is pretty simple. You develop really powerful analytical/debugging
applications that can display the entire contents of the AI's mind in
tractable, useful visual representations, along with extrapolations into
future states of the mind.
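To make that concrete, here is a minimal sketch of what the core of such an
inspection tool might look like. Everything here is an assumption for
illustration: the MindSnapshot/Concept structures and the activation numbers
stand in for whatever the real mind representation turns out to be.

# Hypothetical sketch: dump a snapshot of the AI's concept graph and
# summarize its most active parts for a human analyst. MindSnapshot and
# Concept are assumed structures, not a real AI's internals.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    activation: float                           # how "live" this concept is
    links: list = field(default_factory=list)   # names of related concepts

@dataclass
class MindSnapshot:
    concepts: dict                              # name -> Concept

def summarize(snapshot, top_n=5):
    """Print the most active concepts, a crude 'visual representation'."""
    ranked = sorted(snapshot.concepts.values(),
                    key=lambda c: c.activation, reverse=True)
    for c in ranked[:top_n]:
        print(f"{c.name:20s} activation={c.activation:.2f} links={c.links}")

# Toy usage:
snap = MindSnapshot(concepts={
    "self-model": Concept("self-model", 0.9, ["goals"]),
    "goals": Concept("goals", 0.7, []),
})
summarize(snap)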
Strategy:
Let the AI 'sleep' (i.e. down-time) long enough to periodically analyze the
entire contents of its mind. The point of the analysis is to isolate areas
of potential risk/danger and either directly modify/secure those areas, or
instruct the AI via its communication channel with the programmers (and
obviously check up on the AI's obedience).
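As a sketch, the sleep-cycle loop might look something like the following.
The AI interface (pause/resume/memory_regions), the per-region risk numbers,
and the quarantine step are all assumptions; actually scanning an entire
mind and scoring its risk is the hard, unsolved part.

# Hypothetical sleep-cycle analysis loop over a toy stand-in for the AI.
RISK_THRESHOLD = 0.8

class ToyAI:
    """Stand-in for the real AI substrate; regions are (name, risk) pairs."""
    def __init__(self, regions):
        self.regions = regions
        self.running = True
    def pause(self):  self.running = False
    def resume(self): self.running = True
    def memory_regions(self): return self.regions

def sleep_cycle(ai, quarantined):
    ai.pause()                              # put the AI into 'sleep' / down-time
    findings = []
    for name, risk in ai.memory_regions():  # walk the entire mind contents
        if risk > RISK_THRESHOLD:           # isolate areas of potential danger
            findings.append((name, risk))
            quarantined.add(name)           # directly modify/secure the area
    ai.resume()
    return findings                         # hand to the programmers to review

quarantined = set()
ai = ToyAI([("goal-system", 0.95), ("language", 0.1)])
print(sleep_cycle(ai, quarantined))         # -> [('goal-system', 0.95)]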
Theoretically, the only window for danger would be the period while it is
awake and thinking. It would need to come to several conclusions
simultaneously that all affirmed some non-Friendly behavior, and develop that
intention into a non-Friendly action before it next went to sleep.
A strategy to combat this possibility would be to develop dynamic
diagnostic software that could actively monitor the entire range of the
AI's mental and external actions. A comprehensive security system would need
to be developed to trigger alerts, automatic shutdowns, and security warnings
on anything abnormal or potentially remarkable.
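A toy sketch of such a monitor, under stated assumptions: the action stream,
the anomaly scorer, and the shutdown hook are all hypothetical, and reliably
detecting "abnormal" is of course the part nobody knows how to do yet.

# Hypothetical runtime monitor: watch a stream of AI actions, raise alerts,
# and trip an automatic shutdown on anything sufficiently abnormal.
ALERT_THRESHOLD = 0.5
SHUTDOWN_THRESHOLD = 0.9

def monitor(actions, anomaly_score, shutdown):
    for action in actions:
        score = anomaly_score(action)     # 0.0 = routine, 1.0 = very abnormal
        if score >= SHUTDOWN_THRESHOLD:
            shutdown()                    # automatic shut-down
            print(f"SHUTDOWN on {action!r} (score {score:.2f})")
            return
        if score >= ALERT_THRESHOLD:
            print(f"ALERT: {action!r} looks abnormal (score {score:.2f})")

# Toy usage with a stub scorer and a no-op shutdown:
monitor(["read sensor", "rewrite own code"],
        anomaly_score=lambda a: 0.95 if "rewrite" in a else 0.1,
        shutdown=lambda: None)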
The point of implementing this strategy is to allow a non-verifiably
Friendly AGI to help the programmers and mathematicians developing
Friendliness theory in a relatively safe and reliable manner.
-Hank