Re: Collapsarity

From: Samantha Atkins (samantha@objectent.com)
Date: Tue Apr 01 2003 - 02:02:42 MST


Eliezer S. Yudkowsky wrote:
> Samantha Atkins wrote:
>
>>
>> Wait just a second. You do support the right of sentients to
>> self-determine, including the right to tell the Friendly AI to stay
>> out of their affairs unless they ask for its help, I believe. If
>> so, then some suffering is perfectly consistent with a Friendly AI
>> as such. The question then becomes what happens when the sentient
>> does ask for an end to their suffering. I am not at all sure that
>> it would be in the sentient's best interest, and thus truly
>> friendly, for the FAI to simply fix anything and everything in the
>> sentient's space or nature that led to the suffering. Remember that
>> much of a sentient's suffering is due to internal characteristics,
>> beliefs, programming, whatever, of said sentient. To simply
>> remove/change all of those immediately would likely damage the
>> identity matrix of the sentient and/or have many consequences,
>> unforeseen (by the sentient) and undesired. So again, it is not at
>> all obvious that the FAI would remove all suffering. Medieval
>> torture chamber, yes; rewiring brains to not be instrumental in
>> their own suffering? I have strong doubts that would be
>> unambiguously moral.
>
>
> The point is that the banana test still works. *Zero* intervention is
> not moral. You can always hypothesize changes too huge and too fast
> for people to cope with, in which case I would also currently
> agree/guess that it
> is not "help" to change people's environments, much less the people
> themselves, at speeds that exceed their ability to cope. But just
> because you can imagine changes huge enough (if adopted instantaneously)
> to effectively kill people, it does not follow that there's anything
> wrong with giving a friend a banana. It's just a banana, for crying out
> loud. So if I ask for a banana and I don't get one, I can guess that
> there are no friends around.
>

Depends on how big the "banana" is. In the post I am responding to,
you went on to say that the presence of suffering, of whatever
degree, is proof enough that a Friendly AI is not present. That is a
pretty big banana.

- samantha


