From: Heartland (email@example.com)
Date: Fri Jun 17 2005 - 21:00:51 MDT
> I don't understand why the development of molecular
> nanotechnology will mean the inevitable destruction of
> all things everywhere (on earth, at least), or why the
> development of smarter-than-human intelligence will
> somehow avoid this disaster.
Eliezer replied: "Because by far the simplest and most commercially
attractive application of molecular nanotechnology is computers so
ridiculously powerful that not even AI researchers could fail to create AI
upon them. Brute-forced AI is not likely to be Friendly AI. Hence the end
of the world."
Oh, so that's what that
we-need-to-build-Friendly-AI-before-nanotech-or-we'll-die thing was about.
For the last 3 years I've been under the impression that SIAI was building
Friendly AI to avert the end of the world caused by inevitable grey goo.
That didn't seem to me like a good enough reason to take on an even greater
existential risk by attempting to create FAI. Now, this justification is
much more logical. I suggest you make this explanation more visible in
SIAI's introductory material, to avoid misinterpretation of the main reason
why SIAI is trying to finish building FAI before the arrival of
nanotechnology.
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:51 MDT