Re: Fighting UFAI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jul 11 2005 - 21:05:34 MDT


Phillip Huggan wrote:
> Michael Anissimov <michaelanissimov@gmail.com> wrote:
> /Unfortunately, the bootstrap curve for seed AI seems steep enough that
> by the time an emergent UFAI is noticed, it's very likely time has
> already run out. Ruling out the possibility of a false alarm and
> confirming that the emerging seed is unFriendly would take even more
> time. Remember that a bootstrapping AI will most probably be thinking
> and acting very rapidly compared to humans - /
>
> But AI is still dependent upon the speed of human infrastructure in
> the early stages. It could probably get all the microscopy/lab
> equipment it needs couriered to some location from hardware stores, but
> unless a very foolish person with mechanical aptitude could be convinced
> to aid the AI,

Geez! WHAT are the odds of THAT! How likely is it that an inhumanly
persuasive mind could find, somewhere on the planet Earth, a person willing to
follow phoned instructions about mixing couriered test tubes, in exchange for
a million dollars and a sob cover story about a dying twelve-year-old girl who
needs a cure in the next 48 hours?

Don't assume a UFAI can't do something unless you've spent at least one hour
sitting in lotus position trying to think of a way to do it yourself. Even
then, don't assume it, but at least you'll learn something about the
terrifying power of creativity.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence

