RE: Leitl's objections to non-brute force seed AI and Friendliness

From: Anand AI
Date: Mon Jun 17 2002 - 00:40:23 MDT

Eugen Leitl:

Please consider summarizing, or referencing, your objections to non-brute
force seed AI development and SIAI's theoretical work on Friendliness. In
response, I would ask that Ben, Eliezer, Peter, and others consider
providing refutations, or referencing specific refutations, of Eugen's
objections. This information would assist some of my activities, and
possibly the activities of others.

Please consider fulfilling this request in the near future. The request has
been prompted by some of your remarks to the SL4 and Extropy mailing lists
in the past few weeks. For example:

Eugen Leitl wrote to Extropy:
>If we get a critical AI seed we're likely dead meat, anyway.

Eugen Leitl wrote to Extropy:
>On Thu, 13 Jun 2002, Smigrodzki, Rafal wrote:
> > ### Well, in that case indeed there is nothing to fear in the
> > foreseeable future, and you can safely dismiss Eli, instead
> > of expressing outrage at his efforts.
>If I thought such efforts had a chance I'd use plastique, not words. But
>the memes are dangerous, since possibly motivating people who'd have a
>higher chance of succeeding.

Eugen Leitl wrote to SL4:
>Gordon Worley wrote:
> >
> > Some of us, myself included, see the creation of SI as important enough
> > to be more important than humanity's continuation. Human beings, being
>I hope you don't mind, but if you honestly think that, and you have a
>nonzero chance of succeeding in most people's value system you've just
>earned the privilege to be either incarcerated in maximum security for
>life, or killed on sight, whatever comes first. (In case there are doubts
>to above, I don't mind executing you in person. Okay?)

Thanks in advance.




This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:39 MDT