From: Carl Feynman (carlf@abinitio.com)
Date: Tue May 22 2001 - 10:03:06 MDT
my_sunshine wrote:
> Just to throw out another idea: Is it really necessary to make the first AI
> friendly? It seems that, if the AI is constrained within a particular
> dataspace, it would have no means of manipulating the brick-and-mortar world. If
> this is the case, and unplugging an AI is not ethically objectionable, couldn't
> one launch an AI, see if it works, and then turn it off? (Uhh- FAI heresy!)
>
> Could it, within the window of the experiment, become smart enough to (1) hack
> out of its dataspace (if this is not theoretically impossible) and (2) either
> (a) realize that it should escape in order to satisfy some goal or (b) escape
> by chance?
If it is smarter than any human, and can talk, it could convince someone to let it
out. I'm an intelligent fellow, but on at least one occasion, I've been conned out
of a substantial chunk of money by another intelligent fellow spinning plausible
lies. How much easier would it be to fool me if the con artist were as much smarter
than me as I am smarter than a 5-year-old?
You could say, let's not let it out until we're really, really, really sure. But it
could understand this, and pretend to be nice (or Friendly) until it's too late.
It's much better to build it in such a way that we can convince ourselves of its
Friendliness by inspecting its data structures and/or controlling its design.
--Carl Feynman