From: Michael Roy Ames (michaelroyames@yahoo.com)
Date: Mon Mar 15 2004 - 11:48:28 MST
Michael Anissimov,
I enjoyed reading your comments about McKenna and Pesce.
In reference to:
"If some idiot walks into the AI lab
just as hard takeoff is about to
commence, and spills coffee on the
AI's mainframe, driving it a bit nutty,
then the whole of humanity might be
destroyed by that tiny mistake."
This is a poorly imagined scenario. An accident so overt and random would
not perturb a properly constructed AI. It might shut down the hardware,
thus terminating the instantiation, but could not perturb it such that it
became 'a bit nutty'. For a small perturbation to make an AI 'go bad', it
would have to be very, very poorly designed - it would have to be nutty to
begin with. A poorly designed AI is something to be avoided like the
plague... the inoculation against such a plague being a well constructed
Friendly AI.
To clarify, let me suggest an alternate imaginary scenario:
"If some idiot learns just enough about
AI theory to construct a working prototype,
but not enough to ensure it remains
friendly to other beings, and humans
in particular, then the whole of
humanity may be destroyed by that
person's well-meaning, but ultimately
disastrous, efforts."
Michael Roy Ames
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:46 MDT