From: Phillip Huggan (cdnprodigy@yahoo.com)
Date: Mon Jun 20 2005 - 21:51:03 MDT
p3 wrote:
> I don't understand why the development of molecular
> nanotechnology will mean the inevitable destruction of
> all things everywhere (on earth, at least), or why the
> development of smarter-than-human intelligence will
> somehow avoid this disaster.
Eliezer replied: "Because by far the simplest and most commercially attractive application of molecular nanotechnology is computers so ridiculously powerful that not even AI researchers could fail to create AI upon them. Brute-forced AI is not likely to be Friendly AI. Hence the end of the world."
It should be added that MNT applications are far less likely than GAI to be missed by irresponsible entities such as covert government programmes or corporations. MNT achieved by a responsible entity with enough lead time over rival MNT or AGI efforts could be used to clear the world of potentially dangerous singularity-ish research programmes and other extinction hazards, until minds could be pooled to maximize the odds of a safe introduction of those technologies. But under present realities, an AI of less-than-assured Friendliness might be less risky than facing the run-up to MNT; turning on a "buggy" GAI system might be the only defence against an emergent MNT-enabled tyranny.
This archive was generated by hypermail 2.1.5 : Tue Feb 21 2006 - 04:22:57 MST