Re: Existential Risk and Fermi's Paradox

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Apr 16 2007 - 15:46:38 MDT


José Raerio wrote:
> Hey,
>
> What if this argument is terrific for getting more funding towards a
> safe singularity? :)

It is the policy of the Singularity Institute not to use arguments
merely on the basis of their being terrific unless they also happen to
be true.

The Fermi Paradox has nothing to say about the unfriendly-AI problem one
way or another because the problem of why we can't see unfriendly AIs
eating galaxies is precisely the same as the problem of why we can't see
any other kind of intelligent life.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:57 MDT