RE: [SL4] AI Ethics & Banning the Future.

From: Patrick McCuller (
Date: Wed Feb 16 2000 - 20:08:34 MST

From: "Patrick McCuller" <>

> From: "Eliezer S. Yudkowsky" <>
> "Tortured Norns"

        The more I think about this, the less happy I am. Humans aren't terribly
moral to begin with; in a consequence-free environment, there is an
opportunity for great evil.

        I tend to think that the first strong AI will have in the neighborhood of
10^9 lines of code, and require significant parallel processing. With any
luck, most people won't have access to enough hardware to be able to torture
strong AIs. This doesn't solve the fundamental problem though...

        Richard Dawkins recently voiced his support for the Humanist Manifesto 2000
(in the most recent issue of Reason magazine). He did so with an objection:
that it focused entirely on human beings and was, as he put it, speciesist. He
wants to see a humanist morality that's more of a 'moral gradient' from
humans down to, I suppose, mold. Thus it is wrong to torture cats, but more
wrong to torture chimpanzees, and so on.

        Presumably we could apply this moral gradient to software. I don't think I
could torture Visual Cafe (it sure tortures me), but integrated weak AIs
emulating, for instance, kittens, would count.

        Though I gather 'emulating' isn't exactly the right word.

        In biological organisms, sufficient damage will cause death, and death is
usually permanent. In software, nobody can hear you scream. Over and over.

        I think most humans could eventually be convinced not to hurt machine
intelligences, but some humans you might have to torture to death a few
times before it sinks in.

Patrick McCuller


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:01:06 MDT