From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Fri Apr 13 2001 - 13:16:32 MDT
Declan McCullagh wrote:
> What, do you think that corporate spies or government ones won't
> be able to acquire your code one way or another?
I think that by the time they want to acquire the code, "the code" will be
capable of defending itself, will be capable of figuring out that it's
been kidnapped, and will be too complex (or deliberately self-obscured) to
be passively modifiable in the face of self-created safeguards, by any
unassisted human programmer who steals it.
Before that time period, "the code" may be doing a few cool things, but
nothing that would make anyone want to take the risk of illegal action to
acquire it. Any AI that can't rewrite verself is probably not a threat to
humanity, even if stolen and corrupted.
If there's still a window of vulnerability after all these considerations
have been discarded, then I suppose we'd do our best to defend the code
using ordinary data-security techniques. If those fail, then I suppose
the entire planet will be destroyed, in the worst case anyhow. Why do you
ask? It's not something SIAI needs to worry about until after we have
working code, and after we have working code a lot of new options open up.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:36 MDT