RE: SIAI's flawed friendliness analysis

From: Ben Goertzel
Date: Thu May 29 2003 - 19:49:42 MDT

> >> Volition is a component of "deep, real human happiness" but I don't
> >> think it's the only component.
> >>
> >> One could construct extreme cases of human minds that were strongly
> >> self-determined yet were morally and aesthetically repugnant to all
> >> of us...
> >
> > That's not the question, though; the question is whether we, or a
> > Friendly AI, should interfere with such a mind.

A nontrivial moral question, right?

As a parent, I would interfere with my child's self-determination if they
were going to do something sufficiently destructive. I'd also try to change
them so that they no longer *wanted* to do the destructive thing.

Because we have values for our kids that go beyond the value of their own
self-determination.

But we presumably don't want a Friendly AI to take on this kind of parental
role to humans -- because it's simply too dangerous??

-- Ben G

> Er, to amplify: I was not saying that volition is the only element of
> human happiness, but that it should be substituted into the role played
> by "human happiness" in utilitarian schemas. Maybe some people don't
> want to be happy; or maybe they have things they value higher than
> happiness. I do.
> --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT