From: Rafal Smigrodzki (rafal@smigrodzki.org)
Date: Sun May 18 2003 - 19:21:10 MDT
Bill Hibbard wrote:
>
> I never said that safe AI is a sure thing. It will require
> a broad political movement that is successful in electoral
> politics. It will require whatever commitment and resources
> are needed to regulate AIs. It will require the patience to
> not rush.
### Historically, non-competitive organizations (land monopolies) with very
long feedback loops (e.g. 4-year election cycles) tend to be very ineffective
at controlling problems that result from the actions of large numbers of
independent, fast-acting agents. Guerilla warfare and terrorism are examples:
minimal resources expended by attackers can only be defeated by extremely
damaging, very costly responses from the state. Add to this the problem that
an AI would be incomprehensible to the vast majority of persons involved in
the political process, far more incomprehensible and unpredictable than the
average guerilla, and failure of the political process to assure safe AI is
guaranteed, unless a global prohibition on all progress in AI, and in
computing science generally, were somehow achieved.
If independent development of AI were unsafe, a political process would not
make it any less so.
Rafal