From: my_sunshine (sun@faclib-402.unh.edu)
Date: Tue May 22 2001 - 03:03:56 MDT
Just to throw out another idea: is it really necessary to make the first AI
friendly? It seems that, if the AI is constrained within a particular dataspace,
it would have no means of manipulating the brick-and-mortar world. If this is
the case, and if unplugging an AI is not ethically objectionable, couldn't one
launch an AI, see whether it works, and then turn it off? (Uhh- FAI heresy!)
Could it, within the window of the experiment, become smart enough to (1) hack
its way out of its dataspace (assuming this is not theoretically impossible) and
(2) either (a) realize that it should escape in order to satisfy some goal or
(b) escape by chance?