Re: MW QT AI please phone home; another path to a Singularity?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Apr 21 2002 - 00:40:53 MDT


I must say, this thread is becoming increasingly surreal. This is what the
story so far looks like from over here.

**

DAMIEN'S FRIEND: Hm... if I can run *this* experiment, then if it works, it
should destroy the world! Where can I get funding to verify this?

DAMIEN: Hey, guys, my friend came up with a plan to destroy the world!
Pretty neat, huh? I'm forwarding this to SL4 because I know that you guys
take an interest in this sort of thing.

M. COMESS: Well, if it's so easy, why hasn't someone tried it already?
(Editor's note: Said during a discussion of many-worlds quantum theory.)

COLE KITCHEN: Hm, it looks to me like this experiment may not only destroy
the world but also condemn an exponentially vast number of sentient beings
to an eternity of unknowable pain and horror. Maybe we should consider
*not* doing it?

BEN GOERTZEL: Don't worry, Cole! As a mathematician, I can assure you that
we have absolutely no idea whether this experiment will really destroy the
world or condemn a vast number of sentient beings to eternal hell. So as
you can see there's no reason why we should take this matter seriously.

**

Next, I expect someone to start arguing over whether this experiment, if it
destroys the world, should be categorized in retrospect as the realization
of Existential Risk #4, badly programmed superintelligence, or of
Existential Risk #8, physics disaster. I mention this because it is
obviously the most critical question at hand.

-- -- -- -- --
Eliezer S. Yudkowsky http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


