Re: I am a moral, intelligent being (was Re: Two draft papers: AI and existential risk; heuristics and biases)

From: Ben Goertzel
Date: Tue Jun 06 2006 - 12:33:15 MDT

Hi Robin,

Perhaps I mis-stated Hugo's opinion...

I am sure that he does not think a kind, moral superintelligent being
is impossible.

What he thinks, rather, is that making any kind of GUARANTEE (even a
strong probabilistic guarantee) of the kindness/morality/whatever of
a massively superhumanly intelligent being is almost surely
impossible... no matter what the being's *initial* design...

This is a very different statement.

-- Ben

On 6/6/06, Eliezer S. Yudkowsky <> wrote:
> Robin Lee Powell wrote:
> >
> > It blows my mind that any intelligent and relevantly-knowledgeable
> > person would have failed to perform this thought experiment on
> > themselves to validate, as proof-by-existence, that an intelligent
> > being that both wants to become more intelligent *and* wants to
> > remain kind and moral is possible.
> >
> > Really bizarre and, as I said, starting to become offensive to me,
> > because it seems to imply that my morality is fragile.
> While I agree in general terms with your conclusion, I feel obliged to
> point out that being personally offended by something is not evidence
> against it.
> --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:56 MDT