Re: 6 points about Coherent Extrapolated Volition

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Jul 25 2005 - 09:49:15 MDT


Robin Lee Powell wrote:
> On Mon, Jul 25, 2005 at 02:13:26AM +0100, Russell Wallace wrote:
>
>>On 7/25/05, Eliezer S. Yudkowsky <sentience@pobox.com> wrote:
>>
>>>Hold on a second. CEV is not a subjunctive planetkill until I
>>>say, "I think CEV is solid enough that we could go ahead with it
>>>if I just had the AI theory and the funding". I never said
>>>that.
>>
>>Okay, if and when you do say that it'll become your third
>>subjunctive planet kill (assuming Whimpers and Shrieks count as
>>well as Bangs). Someone please find a deadly flaw in Domain
>>Protection so I can start catching up :)
>
> Umm. Eliezer has never said that about *any* idea he's had. How do
> you count three?

"Plan to Singularity" #1
"Creating Friendly AI" #2

CFAI did *not* propose to explicitly design a Sysop scenario, but *did*
propose to start out by binding the AI to humane emotions as its referent, and
*did* say, "If we have the funding, we should build right away, and work out
the FAI part as we go along," based on the stated assumption that the AI part
was ten times as much work as the F part.

I first realized that almost every possible AI design *would* kill me, not
"might" but "would", in 2002. Until you realize on a gut level that Nature is
trying to kill you, nothing you do is serious safety-wise, no matter how hard
you try to be professional about it. And of course I've never seen anyone
else even try to be professional, come to think of it.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
