Re: Military Friendly AI

From: Eugen Leitl
Date: Fri Jun 28 2002 - 06:05:19 MDT

On Thu, 27 Jun 2002, James Higgins wrote:

> I would tend to worry very little if Ben was about to kick off a
> Singularity attempt, but I would worry very much if you, Eliezer,

The nature of the person is not relevant; the idea is intrinsically evil.
It is a bit ironic that Eliezer, who doesn't trust humanity to take care
of itself well enough to survive, is attempting to create a fix that will
precipitate exactly what he is trying to prevent from occurring.

Eliezer, your world patch is buggy. We cannot let you apply it.

> were. If you don't understand why I suggest you carefully re-read
> many of your recent posts and have some trusted friends, who can be
> completely honest with you, do the same. You do have, I assume,

It's hopeless. He's got selective agnosia when it comes to seeing his own faults.

> trusted friends of this nature (although you very well may not since
> you're focusing so much energy on getting to the Singularity ASAP).

Is anyone here interested in co-authoring a short paper on the risks of
Singularity AI, along with suggested regulations?

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT