From: Randall Randall (randall@randallsquared.com)
Date: Fri Jun 17 2005 - 10:22:31 MDT
On Jun 17, 2005, at 11:44 AM, p3 wrote:
> I don't understand why the development of molecular
> nanotechnology will mean the inevitable destruction of
> all things everywhere (on earth, at least), or why the
> development of smarter-than-human intelligence will
> somehow avoid this disaster.
>
> Could someone explain this to me? Be gentle, I'm not a
> full-fledged singularitarian yet (still slowly climbing
> the shock ladder).
Some singularitarians (myself included) expect that nanotech will be
far safer than AI, and far easier to achieve. One reason nanotech is
safer is that it will be fairly vulnerable to heat: the business end
of grey goo should be easy to disrupt, so any really widespread
nanotech disaster would have to be deliberate. We've been living with
the possibility of deliberate, large-scale destruction for more than
half a century, so that's a known danger, unlike the largely unknown
danger of superintelligence.
To put it another way, some members of the species will very likely
survive grey goo no matter how bad it gets, given nukes and the option
of leaving quickly using assemblers. An AI, in contrast, is a binary
gamble: if a hard take-off SAI is possible, then the only route to
safety is for the AI builders to get it exactly right, something even
some of them consider a remote possibility.
The best hope, in my opinion, is that AI turns out to be too hard to
build in the next 20 years without nanotech-built hardware, while
molecular assembly becomes widespread within the next 3-15 years, so
that there's some chance of escape for those who'd rather not
personally be around when an AI emerges.
Most here will probably disagree with me, of course. :)
--
Randall Randall <randall@randallsquared.com>
"Lisp will give you a kazillion ways to solve a problem.
But (1- kazillion) are wrong." - Kenny Tilton