Re: [sl4] A model of RSI

From: Matt Mahoney
Date: Sat Sep 27 2008 - 16:00:26 MDT

--- On Fri, 9/26/08, Eric Burton wrote:

> >I have a hard time picturing what disaster caused by a
> >failure to achieve AI would be
> >worse (in the context of human ethics) than a gray goo
> >accident killing all DNA based
> >life. Could you elaborate?
> If there were a civilization on Earth capable of producing
> self-reproducing nanomachines so efficient as to produce a risk to
> life and limb, presumably those same people could make a
> counter-agent, or have some other means of containment at hand.

To give a counterexample, suppose we genetically engineered a highly contagious cold virus that transmitted HIV. We know from experience how rapidly microorganisms develop drug resistance through evolution. Nanotechnology capable of evolving to evade eradication attempts could be far more dangerous. Our best hope would be to create a competing species with higher fitness, but that too could have unforeseen consequences (such as accelerating our own extinction).
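The evolutionary dynamic behind that claim can be illustrated with a toy selection model (a hypothetical sketch, not anything from the original post): self-replicators carry a heritable "resistance" trait, each generation an eradication attempt preferentially removes low-resistance individuals, and survivors reproduce with small mutations. Mean resistance ratchets upward, which is why repeated eradication attempts against an evolving replicator tend to fail.

```python
import random

def evolve(generations=30, pop_size=200, mutation=0.05, seed=0):
    """Toy mutation/selection model. Each individual's 'resistance' in [0, 1]
    is its probability of surviving an eradication attempt. Survivors
    repopulate with Gaussian mutation, so selection drives resistance up."""
    rng = random.Random(seed)
    # Start with uniformly low resistance (mean ~0.15).
    pop = [rng.random() * 0.3 for _ in range(pop_size)]
    for _ in range(generations):
        # Eradication attempt: each individual survives with probability
        # equal to its resistance (floor of 0.02 so escape is always possible).
        survivors = [r for r in pop if rng.random() < max(r, 0.02)]
        if not survivors:
            return 0.0  # eradication actually succeeded
        # Survivors reproduce back to carrying capacity, with mutation.
        pop = [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, mutation)))
               for _ in range(pop_size)]
    return sum(pop) / len(pop)

print(evolve())
```

Under these assumed parameters the mean resistance climbs well above its starting value within a few dozen generations; the same logic applies whether the replicator is a bacterium facing an antibiotic or a nanomachine facing a counter-agent.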

-- Matt Mahoney

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:03 MDT