From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Apr 19 2001 - 14:41:14 MDT
(Incidentally, we haven't released the Guidelines yet. The Guidelines will be released later, probably around June. FAI is what we'll use as the technical background for the Guidelines.)
Welcome to my life; I have the impossible task of simultaneously
convincing AI researchers that they have a professional obligation to be
paranoid, convincing futurists not to worry about the wrong
(anthropomorphic) malfunctions, and convincing the general populace that
AIs can be made at least as trustworthy as humans or human organizations.
For an extra bonus, I shall use arguments which are still convincing to
all three demographics after being filtered through someone else's
keyboard.
-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence