RE: Rationalizing death events (was Infinite universe)

From: Mike Williams (mikew12345@lvcm.com)
Date: Tue Apr 29 2003 - 21:52:26 MDT


It's easy to think of a situation that calls for sacrificing a few lives to save many.
Perhaps a well-armed terrorist group has a nuclear device in the sewers
under New York, inaccessible to the FAI's resources. Sending in SWAT to
take them out is calculated to result in the loss of 5 good men.

But that's too easy; try this one. Three people are dying: one from a
failing heart, two from failed kidneys. A healthy person who happens to
match their blood and tissue types can be sacrificed to save the other 3.
In making this decision, does it matter whether the healthy person is a
convicted killer and the other 3 are Nobel Prize winners, or perhaps vice
versa?

This can go on forever with what-ifs, such as: how many monkeys is it OK to
sacrifice in order to save 1 human life? I don't know that there are any
easy answers, but I'm curious how an FAI might look at it.

Mike W.

-----Original Message-----
From: owner-sl4@sl4.org [mailto:owner-sl4@sl4.org]On Behalf Of Eliezer
S. Yudkowsky
Sent: Tuesday, April 29, 2003 8:13 PM
To: sl4@sl4.org
Subject: Re: Rationalizing death events (was Infinite universe)

Mike Williams wrote:
>>If, depending on your choice, one person
>>died or a million people died, you'd choose so that only one person died,
>>right? You wouldn't say: "Well, the death event exists either way."
>
> This brings up a question that's been nagging at me for a while. Would an
> FAI make this kind of decision? Assume that the FAI is mature and in
> control of earth's resources.
> 1) If it can act to save a million people by sacrificing one person,
> would it do that?
> 2) If so, then if it could save a million people by sacrificing 999,999
> people, would it do that?

It's very hard to see a situation where a mature FAI would be faced with
that decision. And my own impulse is to reply: "Of course it would."
The human injunction that 'the ends do not justify the means' guards
against our fallibility and our warped political emotions. Change that,
and what's left is only the lives.

But perhaps an FAI would say differently. I can also see an irreplaceable
value in an FAI not killing anyone, ever, throughout the whole of human
history. I'm just not sure that value is greater than the value of a
human life.

--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence
