From: Russell Wallace (russell.wallace@gmail.com)
Date: Fri Jan 28 2005 - 10:37:48 MST
On Fri, 28 Jan 2005 11:02:39 -0500, Eliezer S. Yudkowsky
<sentience@pobox.com> wrote:
>
> "Obvious" simply DOES NOT COUNT on the scientific frontier. *Any*
> scientific frontier, not just those on which lives depend. Nature has this
> disconcerting habit of coming back and saying "So what?"
Which is why I agree with you about the attitude AI projects should
take with regard to safety. If I'm right about the thresholds
involved, implementing safety precautions in the next decade or so
will merely waste a small amount of time. If you're right, failing to
implement them could destroy the world. Therefore all AI
projects should put safety first - the analogy with "always assume a
gun is loaded even when you know it isn't" is a good one.
- Russell
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:50 MDT