From: Russell Wallace (firstname.lastname@example.org)
Date: Fri Jun 17 2005 - 09:54:59 MDT
On 6/17/05, p3 <email@example.com> wrote:
> I don't understand why the development of molecular
> nanotechnology will mean the inevitable destruction of
> all things everywhere (on earth, at least)
It won't in and of itself; essentially, time and evolution will do
that. What nanotechnology will do, by making matter as malleable as
software, is greatly accelerate the timescales involved.
> or why the
> development of smarter-than-human intelligence will
> somehow avoid this disaster.
Again, AI won't in and of itself avoid disaster, but it appears to be
a prerequisite for any workable long-term strategy of disaster
avoidance; here's my attempt at explaining why:
This archive was generated by hypermail 2.1.5 : Wed May 22 2013 - 04:01:06 MDT