From: Phillip Huggan (firstname.lastname@example.org)
Date: Mon Jul 11 2005 - 20:31:36 MDT
Michael Anissimov <email@example.com> wrote:
Unfortunately, the bootstrap curve for seed AI seems steep enough that
by the time an emergent UFAI is noticed, it's very likely time has
already run out. Ruling out the possibility of a false alarm and
confirming that the emerging seed is unFriendly would take even more
time. Remember that a bootstrapping AI will most probably be thinking
and acting very rapidly compared to humans -
But an AI is still dependent upon the speed of human infrastructure in the early stages. It could probably get all the microscopy/lab equipment it needs couriered to some location from hardware stores, but unless a very foolish person with mechanical aptitude could be convinced to aid it, some sort of robotics will be needed for assembly. A decade from now, there might be only a few hundred fab/robotics plants in the world capable of aiding an AI. Perhaps beefing up log-book techniques and computer security at these locations (and maybe setting some AI traps) is all that's needed to prevent a UFAI from immediately gaining MM. If communications infrastructures in the future contain quantum encryption schemes, a UFAI may never be able to get loose, at least not until FAI or human MM counter-measures are developed.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT