From: Dan Clemmensen (dgc@cox.rr.com)
Date: Sat Aug 04 2001 - 09:39:06 MDT
Amara D. Angelica wrote:
> Brian, those are interesting and important points. I've forwarded your post
> to Ray for his thoughts.
>
> Is there a consensus on this list that the Singularity is good and should be
> accelerated?
>
Short answer: It's good and it should be accelerated.
Long answer: it's inevitable, and the nature of the initial
implementation of the SI is unlikely to have any effect on its nature
after the first few iterations of self-enhancement. Attempts to retard
or defer the Singularity in order to guide it are therefore a waste of
time. Since the SI is inevitable, there is no long-term benefit in
deferring it, no matter what it turns out to be like. Advancing it may
have long-term benefit if the SI is "good," because the benefits accrue
to everyone living at that time, and they do not accrue to those who
would have died during the period of deferral.
BTW, I may still support Eliezer's "Friendly AI" once I understand
his concept. If implementing "Friendliness" is not a major diversion,
then it's good insurance. "Friendliness" may also be an essential
part of technical creativity.