From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jun 17 2005 - 11:57:27 MDT
p3 wrote:
> I don't understand why the development of molecular
> nanotechnology will mean the inevitable destruction of
> all things everywhere (on earth, at least), or why the
> development of smarter-than-human intelligence will
> somehow avoid this disaster.
>
> Could someone explain this to me? Be gentle, I'm not a
> full-fledged Singularitarian yet (still slowly climbing
> the shock ladder).
Because by far the simplest and most commercially attractive application of
molecular nanotechnology is computers so ridiculously powerful that not even
AI researchers could fail to create AI on them. An AI brute-forced into
existence on that hardware, rather than carefully designed, is not likely to
be a Friendly AI, and a smarter-than-human AI that is not Friendly has no
particular reason to preserve us. Hence the end of the world.
Grey goo, or even military nanotechnology, is probably just a distraction from
this much simpler, more commercially attractive, and more readily achievable
extinction scenario.
Developing AI first won't necessarily avoid exactly the same catastrophe.
Developing Friendly AI first presumably would.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence