Re: Friendly Existential Wager

From: Eliezer S. Yudkowsky
Date: Fri Jun 28 2002 - 13:44:34 MDT

Mark Walker wrote:
> E.Y. thinks Friendliness first, B. G. thinks AGI first. Who is right?
> Suppose we don't know. How should we act? Well, either attempting to
> design for Friendliness before AGI will be effective in raising the
> probability of a good singularity, or it will not.

Actually, my philosophy differs from Ben's in that I think you need
substantially more advance knowledge, in general, to bring any kind of AI
characteristic into existence, including Friendliness. From Ben's
perspective, this makes me arrogant; from my perspective, Ben's reliance on
emergence is wishful thinking. I do think that understanding of Friendly AI
follows from understanding of AI, rather than the other way around. You
can't have Friendly AI without AI; you can't build moral thoughts unless you
know what thoughts are and how they work.

Eliezer S. Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence
