From: Mark Nuzzolilo II (nuzz604@gmail.com)
Date: Mon Jun 19 2006 - 19:14:13 MDT
----- Original Message -----
From: "H C" <lphege@hotmail.com>
To: <sl4@sl4.org>
Sent: Sunday, June 18, 2006 1:21 PM
Subject: Mind Reading - Approach to Friendliness
> The concept is pretty simple. You develop really powerful
> analytical/debugging applications that can display the entire contents of
> the AI mind in tractable and useful visual representations, along with
> extrapolations into future states of the mind.
>
> Strategy:
> Let the AI 'sleep' (i.e. down-time) long enough to periodically analyze
> the entire contents of the mind. The point of the analysis is to isolate
> areas of potential risk/danger and either directly modify/secure these
> areas, or to instruct the AI via its communication channel with the
> programmers (and obviously check up on the AI's obedience).
>
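In rough Python, the proposed sleep-and-analyze cycle might look something
like the sketch below. The Agent interface, find_risk_areas, and
notify_programmers are hypothetical placeholders for whatever the real
system would expose, not anything specified in the proposal itself.

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class Risk:
        region: str              # which part of the mind looks dangerous
        proposed_fix: str        # patch to apply, if one is known
        can_patch_directly: bool

    class Agent(Protocol):
        # Assumed interface onto the AGI -- purely hypothetical.
        def run_for(self, seconds: float) -> None: ...
        def pause(self) -> None: ...
        def resume(self) -> None: ...
        def snapshot_mind(self) -> dict: ...
        def patch(self, region: str, fix: str) -> None: ...

    def analysis_cycle(agent: Agent, find_risk_areas, notify_programmers,
                       awake_seconds: float = 3600.0) -> None:
        """Alternate awake periods with full-state analysis during down-time."""
        while True:
            agent.run_for(awake_seconds)        # awake and thinking
            agent.pause()                       # 'sleep': no further cognition
            snapshot = agent.snapshot_mind()    # entire contents of the mind
            for risk in find_risk_areas(snapshot):
                if risk.can_patch_directly:
                    agent.patch(risk.region, risk.proposed_fix)  # modify/secure
                else:
                    notify_programmers(risk)    # use the communication channel
            agent.resume()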
> Theoretically, the only window for danger would be the period when it is
> awake and thinking. It would need to reach several conclusions
> simultaneously that all affirmed some non-Friendly behavior, and develop
> that intention into a non-Friendly action before it went to sleep.
>
> A strategy to combat this possibility would be to develop dynamic
> diagnostic software that could actively monitor the entire range of the
> AI's mental and external actions. A comprehensive security system would
> need to be developed to raise alerts and security warnings, trigger
> automatic shutdowns, and flag anything abnormal or potentially remarkable.
>
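Again only as an illustration, with made-up names (abnormality_score,
alert_programmers, the Event fields) standing in for whatever the real
diagnostic software would provide, that kind of monitor might be shaped
like this:

    from dataclasses import dataclass

    @dataclass
    class Event:
        kind: str      # e.g. 'thought', 'self-modification', 'external action'
        detail: str

    ALERT_THRESHOLD = 0.5   # log the event and warn the programmers
    HALT_THRESHOLD = 0.9    # trigger an automatic shutdown

    def monitor(agent, event_stream, abnormality_score, alert_programmers, log):
        """Watch every mental/external action, escalate on anything abnormal."""
        for event in event_stream:
            score = abnormality_score(event)    # 0.0 = routine, 1.0 = alarming
            log(event, score)
            if score >= HALT_THRESHOLD:
                agent.pause()                   # automatic shutdown
                alert_programmers(event, score)
                return
            if score >= ALERT_THRESHOLD:
                alert_programmers(event, score) # security warning, keep running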
> The point of implementing this strategy is to allow a non-verifiably
> Friendly AGI to help the programmers and mathematicians developing
> Friendliness theory in a relatively safe and reliable manner.
>
> -Hank
>
>
I don't believe this will allow ordinary humans to detect all, or even
most, of the possible failures (you are viewing only the present state of
the system, and that says nothing about future states). With such a system,
you might be lucky enough to catch a major fault early enough to make it
less dangerous (or friendly, if such a thing is practically possible), or
to shut it down, but it seems to me that the time and resources required to
design and implement such a system would do more harm than good. Someone
else could build an unfriendly AI while you are working on a system that
may make little or no difference to the end result.
I won't take the naive approach of suggesting a better alternative; what do
any of us really know at this point? I say wait a few more years, at least
until preliminary AGI experimentation starts to emerge, so that you have a
foundation to build on.
Nuzz