From: Bill Hibbard (test@demedici.ssec.wisc.edu)
Date: Sun May 18 2003 - 15:33:16 MDT
On Sat, 17 May 2003, Brian Atkins wrote:
> Bill Hibbard wrote:
> > chapter. It will be impossible for regulation to prevent all
> > construction of unsafe AIs, just as it is impossible to prevent
> > all crimes of any kind. But for an unsafe AI to pose a real
> > threat it must have power in the world, meaning either control
> > over significant weapons (including things like 767s), or access
> > to significant numbers of humans. But having such power in the
> > world will make the AI detectable, so that it can be inspected
> > to determine whether it conforms to safety regulations.
>
> We are worrying about a UFAI that has reached the level of intelligence
> to be dangerous, yes? Somewhere above human level. AND, it has at least
> one very willing human servant at its disposal. Even IF (a huge and
> quite unlikely if IMO) it can't use the Net undetected to get what it
> needs done, the human will do its bidding. I'm sorry, but your scheme
> described above looks rather silly in the face of such an adversary. The
> current government can't even catch lowly human-intelligence-level
> terrorists very well.
I'm not going to argue that a human accomplice cannot do a
lot of damage. And I do not claim that eliminating unsafe
AI is a sure thing.
My real point is that without genuine political commitment
and resources for regulating AI, unsafe AI is a sure thing.
> Remember, a being with such an intelligence level may very well not
> pursue power through the kinds of traditional means you may be
> imagining. It could, for instance, if it were good enough, come up with a
> plan to build some working molecular nanotech and from there do whatever
> it wants. And it might be able to execute such a plan in an undetectable
> fashion, barring a 100% worldwide transparent society.
Nanotech and genetic engineering of micro-organisms are
huge threats independent of AI. I think human society will
devote large resources to detection networks for these
things (such networks already exist to some extent for
micro-organisms). It is far from a sure thing that humanity
will survive these threats. There is a new book, "Our
Final Hour" by Martin Rees, about these threats and the
very real possibility that humanity will fail to meet them.
But with these threats and the threat from unsafe AI, the
only hope is a strong political commitment to do what is
necessary to meet them.
> > The danger of outlaws will increase as the technology for
> > intelligent artifacts becomes easier. But as time passes we
> > will also have the help of safe AIs in detecting and
> > inspecting other AIs.
> >
>
> Even in such fictional books as Neuromancer, we see that such Turing
> Police do not function well enough to stop a determined superior
> intelligence. Realistically, such a police force will only have any real
> chance of success at all if we have a very transparent society... it
> would require societal changes on a very grand scale, and not just in
> one country. It all seems rather unlikely... I think we need to focus on
> solutions that have a chance at actual implementation.
I never said that safe AI is a sure thing. It will require
a broad political movement that is successful in electoral
politics. It will require whatever commitment and resources
are needed to regulate AIs. It will require the patience
not to rush.
By pointing out all these difficulties, you are helping
me make my case about the flaws in the SIAI friendliness
analysis, which simply dismisses the importance of
politics and regulation in eliminating unsafe AI.
Cheers,
Bill
----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
test@demedici.ssec.wisc.edu 608-263-4427 fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html