From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jun 25 2002 - 13:07:37 MDT
James Higgins wrote:
 > At 11:02 AM 6/25/2002 -0600, Ben Goertzel wrote:
 >
 >> But I also don't trust YOUR, or MY, or anyone else's theory
 >> of "how to make AI's friendly" or "how to make the Singularity come out
 >> well."   It worries me that you are so confident in your own theory of
 >> how to make the Singularity come out well, when in fact you like all
 >> the rest of us are confronting an unknown domain of experience, in
 >> which all our thoughts and ideas may prove irrelevant and overly
 >> narrow.
 >
 > Wow, Ben, you managed to exactly nail the issue I have with Eliezer's
 > efforts.  Excuse me, but we are all IGNORANT when it comes to the
 > Singularity, Eliezer, Ben, myself and everyone else included.  Having a
 > strong belief to the contrary while attempting to create the Singularity
 >  ASAP is rather frightening.  And I don't have a tendency toward
 > irrational fear, just self-preservation.

Don't worry.  My alleged "self-confidence" is Ben's invention.  I happen to
be fairly confident that many of Ben's theories are wrong; that is not at all
the same as being confident that my own theories are right.

Nobody who thought the Singularity was understandable would ever have
invented Friendly AI.
-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence