Re: Six Places to Nuke When You're Serious

From: H C (lphege@hotmail.com)
Date: Tue Aug 08 2006 - 20:12:14 MDT


>From: "Mike Dougherty" <msd001@gmail.com>
>Reply-To: sl4@sl4.org
>To: sl4@sl4.org
>Subject: Re: Six Places to Nuke When You're Serious
>Date: Tue, 8 Aug 2006 21:06:02 -0400
>
>What is the purpose of discussing this concept? Is the idea to better
>defend against any of the proposed attacks? Is there some way friendly AI
>could think of every target before we do and direct resources to prevent
>it? I suggest that, unlike the blog, attack plans here should include
>plausible preventative countermeasures - else why bother?
>
>On 8/8/06, Michael Anissimov <michaelanissimov@gmail.com> wrote:
>>
>>Use the <i> and </i> tags to put someone else's quote in italics and
>>respond below.
>>
>>-Michael
>>

I think this is an awesome blog post (as usual), and I think this kind of
discussion is becoming increasingly important... war is all around us. While
I believe it is unlikely that war will significantly affect the arrival of
the Singularity, I also think it is extremely important to avoid another
Holocaust (or worse) before the Singularity. Radical Islam is an incredibly
dangerous meme that is undergoing explosive growth right now. The extremists
are like HIV, and Earth is showing the early symptoms of AIDS.

And right now we have no technology capable of countering such a powerful
virus, only a radiation treatment that involves a lot of collateral damage.

But that is all a political digression... such problems would be most safely
and effectively solved with Friendly AGI. Of course, that assumes you don't
do much, much worse when you build the AGI: if you have a technology capable
of solving problems of that magnitude, you have a technology capable of
causing problems of a magnitude far greater than nuclear war.

-hank
