Re: Collapsarity

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Mar 31 2003 - 11:49:42 MST


Samantha Atkins wrote:
>
> Wait just a second. You do support the right of sentients to
> self-determine including the right to tell the Friendly AI to stay out
> of their affairs unless they ask for its help, I believe. If so then
> some suffering is perfectly consistent with a Friendly AI as such. The
> question then becomes what happens when the sentient does ask for an end
> to their suffering. I am not at all sure that it would be in the
> sentient's best interest and thus truly friendly for the FAI to simply
> fix anything and everything in the sentient's space or nature that led to
> the suffering. Remember that much of a sentient's suffering is due
> to internal characteristics, beliefs, programming, whatever of
> said sentient. To simply remove/change all of those immediately would
> likely damage the identity matrix of the sentient and/or have many
> unforeseen (by the sentient) consequences not desired. So again, it is
> not at all obvious that the FAI would remove all suffering. Medieval
> torture chamber yes, rewiring brains to not be instrumental in their own
> suffering? I have strong doubts that would be unambiguously moral.

The point is that the banana test still works. *Zero* intervention is not
moral. You can always hypothesize changes too huge and too fast for
people to cope with, in which case I would also currently agree/guess that it
is not "help" to change people's environments, much less the people
themselves, at speeds that exceed their ability to cope. But just because
you can imagine changes huge enough (if adopted instantaneously) to
effectively kill people, it does not follow that there's anything wrong
with giving a friend a banana. It's just a banana, for crying out loud.
So if I ask for a banana and I don't get one, I can guess that there are
no friends around.

-- 
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jul 17 2013 - 04:00:42 MDT