Re: How hard a Singularity?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jun 25 2002 - 10:15:51 MDT


Ben Goertzel wrote:
>
> And the point of *that* point was, simply to give another piece of evidence
> that: A delayed Singularity leading to a human-friendly superhuman AGI, is
> probably better than a quicker Singularity leading to a human-indifferent
> AGI. In spite of the potentially large cost of many deaths during the
> delay.

Ben, Nick Bostrom has already written a formal analysis of this one, coming
to the same conclusion: humanity's future is larger than its present, so
*IF* there is a conflict, the moral thing is to increase the probability of
a (safe) Singularity even at the expense of time-to-Singularity. *BUT* the
only consideration I know of in which spending more time could conceivably
buy you any safety at all is the Friendly AI part of the AI problem - and
even there, you have to *start* as early as possible, because computers
keep getting more powerful and keep shortening the intrinsic length of the
AI development timeline. Delay is always a very "unnatural" element in
Singularity strategies - the Singularity doesn't *want* to slow down.
Necessary risks are still risks and should be taken as sparingly as
possible.
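
For concreteness, here is a minimal sketch of the expected-value comparison
that kind of analysis turns on; the numbers are illustrative assumptions
for the example only, not figures from Bostrom's paper or from this thread.

    # Illustrative expected-value sketch (all figures are assumptions
    # chosen for the example, not data from the sources cited here).

    FUTURE_LIVES = 1e30       # assumed scale of humanity's potential future
    DEATHS_PER_YEAR = 5.5e7   # assumed cost, in lives, of each year of delay

    def expected_value(p_safe, delay_years):
        """Expected lives: the future weighted by the chance of a safe
        Singularity, minus the deaths incurred while waiting."""
        return p_safe * FUTURE_LIVES - delay_years * DEATHS_PER_YEAR

    rushed  = expected_value(p_safe=0.50, delay_years=0)
    delayed = expected_value(p_safe=0.51, delay_years=10)
    print(delayed > rushed)   # True: a small safety gain dwarfs the delay cost

The asymmetry only holds, of course, if the delay actually buys probability
of a safe outcome - which is exactly the condition argued above.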

In all other cases, the answer is unambiguous: the sooner you make it to
the Singularity, the safer you are. A multipolar, militarized technological
civilization containing only human-level intelligences is just not safe.

http://www.transhumanist.com/Waste.htm

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
