From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Tue Sep 28 2004 - 06:41:34 MDT
Randall Randall wrote:
>
> This is also my view. It's just that without nanotech, there's
> no real hope of being able to go far enough to be unavailable to
> those desirous of conflict. Basically, my whole problem with
> SAI building is that it seems to be, by the admission of those
> involved, a winner-take-all proposition right now. Even if
> molecular manufacturing were certain to kill 99% of the human
> race, it would be better than an SAI with a 99% chance of being
> benign. However, if FTL is possible, we may have to face that
> choice anyway.
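To make that last comparison explicit (writing N for the current population, and assuming, as the comparison seems to intend, that a non-benign SAI means extinction):

    E[\text{survivors} \mid \text{nanotech}] = 1.00 \times 0.01N = 0.01N
    E[\text{survivors} \mid \text{SAI}] = 0.99 \times N + 0.01 \times 0 = 0.99N
    P(\text{extinction} \mid \text{nanotech}) = 0, \qquad P(\text{extinction} \mid \text{SAI}) = 0.01

In expected survivors the SAI comes out ahead (0.99N vs. 0.01N); the ranking above follows only if an outcome with zero survivors is weighted as categorically worse than any outcome that leaves survivors to rebuild.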
Unfortunately, this isn't an either-or proposition. Building a nanofactory
does not prevent anyone else from creating an AI. Quite the opposite. It
gives them 10^25 ops/sec or some other ridiculous amount of computing power
that lets them brute-force the problem even if they don't have the vaguest
idea of what they're doing. If you solve the FAI problem, you probably
solve the nanotech problem. If you solve the nanotech problem, you
probably make the AI problem much worse. My preference for solving the AI
problem as quickly as possible has nothing to do with the relative danger
of AI and nanotech. It's about the optimal ordering of AI and nanotech.
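For scale, 10^25 ops/sec sits some eight or more orders of magnitude above common estimates of human-brain-equivalent compute (roughly 10^14-10^17 ops/sec), which is what makes "brute force without understanding" plausible at all. Here is a minimal sketch of the ordering argument as a toy probability model; it is an illustration of my own construction, not anything from the original post, and every number in it is an arbitrary placeholder. Solving FAI first collapses both risks into one success probability, while going nanotech-first multiplies three independent survival requirements together.

    # Toy model of the ordering argument above -- an illustration only.
    # All probabilities are made-up placeholders; only the structure matters.

    def p_good_outcome(fai_first: bool,
                       p_fai_success: float = 0.9,
                       p_nanotech_safe: float = 0.9,
                       p_bruteforce_ufai: float = 0.9) -> float:
        """Chance of a good outcome under one of the two orderings.

        p_bruteforce_ufai is the chance that, given nanotech-scale
        computing power, someone brute-forces an unFriendly AI before
        the FAI problem is solved.
        """
        if fai_first:
            # Solving FAI first is assumed to also handle nanotech risk
            # ("if you solve the FAI problem, you probably solve the
            # nanotech problem").
            return p_fai_success
        # Nanotech first: the nanotech transition itself must go well,
        # AND nobody may brute-force an unFriendly AI with the new
        # hardware, AND the FAI effort must still succeed under that
        # time pressure.
        return p_nanotech_safe * (1 - p_bruteforce_ufai) * p_fai_success

    print("FAI first:     ", p_good_outcome(fai_first=True))   # 0.9
    print("nanotech first:", p_good_outcome(fai_first=False))  # ~0.081

The exact numbers are meaningless; the point is structural: the nanotech-first branch has to win three gambles where the FAI-first branch wins one.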
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence