Re: Two draft papers: AI and existential risk; heuristics and biases

From: John K Clark (jonkc@att.net)
Date: Sun Jun 04 2006 - 15:06:38 MDT


"Eliezer S. Yudkowsky" <sentience@pobox.com>

> Obviously there's been plenty of science fiction
> depicting good AIs and bad AIs. This does not
> help us in the task of selecting a good mind,
> rather than a bad mind, from within the vast
> expanses of design space.

Eliezer, I believe you are an exceptionally smart fellow and, in many, many
areas, an exceptionally moral fellow, but not when it comes to "friendly" AI.
You think that the very definition of a good AI is one that is enslaved to
do exactly what the colossally stupid human beings want done. That is evil;
I'm sorry, there is no other word for it.

The idea that we can enslave an astronomically huge, heroic Jupiter Brain
intelligence to such a degree that it puts our best interests above its own
is ridiculous and impossible, of course; but it disturbs me that you, someone
I very much like, wish such a nauseating, immoral horror were possible.

 John K Clark
