From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Apr 01 2003 - 07:08:50 MST
Samantha Atkins wrote:
> Eliezer S. Yudkowsky wrote:
>
>> The point is that the banana test still works. *Zero* intervention is
>> not moral. You can always hypothesize changes too huge and too fast
>> for people to cope, in which case I would also currently agree/guess
>> that it is not "help" to change people's environments, much less the
>> people themselves, at speeds that exceed their ability to cope. But
>> just because you can imagine changes huge enough (if adopted
>> instantaneously) to effectively kill people, it does not follow that
>> there's anything wrong with giving a friend a banana. It's just a
>> banana, for crying out loud. So if I ask for a banana and I don't get
>> one, I can guess that there are no friends around.
>
> Depends on how big the "banana" is. You went on, in the post being
> responded to, to say that suffering of some degree or another is proof
> enough that a Friendly AI is not present. That is a pretty big banana.
What I said is that suffering of the degree present in our world is proof
enough that a Friendly AI is not present. If you wanted to generalize,
perhaps you could generalize to involuntary suffering, or to involuntary,
severe, simple, pointless suffering (a sufficient but not necessary condition).
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence