From: Eliezer S. Yudkowsky (firstname.lastname@example.org)
Date: Fri May 30 2003 - 05:50:53 MDT
Ben Goertzel wrote:
>>>> One could construct extreme cases of human minds that were
>>>> strongly self-determined yet were morally and aesthetically
>>>> repugnant to all of us...
>>> That's not the question, though; the question is whether we, or a
>>> Friendly AI, should interfere with such a mind.
> A nontrivial moral question, right?
Certainly nontrivial in its process. The output might be "No", or
something more complex.
> As a parent, I would interfere with my child's self-determination if
> they were going to do something sufficiently destructive. I'd also try
> to change them so that they no longer *wanted* to do the destructive
Angels and ministers of grace preserve us, Ben, I hope you were talking
about an AI and not a human child! Just reach into a human mind and
tamper like that? Thank Belldandy my own parents didn't have that
capability or I'd be a nice, normal, Orthodox Jew right now.
> Because we have values for our kids that go beyond the value of
What about the kids' values for themselves? Parents don't own children.
> But we presumably don't want a Friendly AI to take on this kind of
> parental role to humans -- because it's simply too dangerous??
Because I think it's wrong. Direct nonconsensual mind modification seems
unambiguously wrong. I'm not as sure about gentle friendly advice given
in the service of humane goals not shared by the individual; that strikes
me as an ethically ambiguous case of "What would humanity want to happen?"
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT