Re: Existential Risk and Fermi's Paradox

From: Stathis Papaioannou (stathisp@gmail.com)
Date: Thu Apr 19 2007 - 21:57:26 MDT


On 4/20/07, Gordon Worley <redbird@mac.com> wrote:

> The theory of Friendly AI is fully developed and leads to the
> creation of a Friendly AI path to Singularity first (after all, we
> may create something that isn't a Friendly AI but that will figure
> out how to create a Friendly AI). However, when this path is
> enacted, what are the chances that something will cause an
> existential disaster? Although I suspect it would be less than the
> chances of a non-Friendly AI path to Singularity, how much less? Is
> it a large enough difference to warrant the extra time, money, and
> effort required for Friendly AI?

Non-Friendly AI might be more likely to cause an existential disaster from
our point of view, but from its own point of view, unencumbered by concerns
for anything other than its own well-being, wouldn't it be more rather than
less likely to survive and colonise the galaxy?

Stathis Papaioannou
