Vance on Relief

1. A Pair of Cases Against Singer: Consider the following case:

Pond Bob is walking near a shallow pond with no other pedestrians nearby when he notices a drowning child. Bob is wearing an expensive watch—which, let us assume, is valued at $1,000, would be ruined if it got wet, and would take too long to remove were Bob to jump into the water in time to save the child's life. Planning instead to sell the watch and donate the proceeds where they will save five children in a distant nation from starvation, Bob continues his leisurely stroll past the pond as the drowning child dies. Later that day, he sells the watch and donates the proceeds, as planned.

It seems that Bob acts wrongly in this case. He ought to have saved the drowning child. But, now consider another case:

Charity Bob plans to sell his watch, valued at $1,000, and donate the proceeds to one of two causes: He can either donate the money to a charity that will save one child in a distant nation from starvation, or he can donate it to a charity that will save five children in a distant nation from starvation (who, let us assume, due to local economic circumstances, can be saved for $200 each, rather than $1,000). Bob ultimately decides on the latter charity, and saves the lives of five children.

Here, it seems that Bob does NOT act wrongly. It is at least PERMISSIBLE to neglect to save the one child so that he may instead save the five—and (assuming these are his ONLY two choices) perhaps even obligatory.

But, then, there must be some moral difference between failing to save the one drowning child in Pond, and failing to save the one starving child in Charity. In short, we have an intuitive case for believing that Peter Singer’s argument by analogy is unsound.

2. Causally Relevant Failures: But, what IS the moral difference? To see that, consider another pair of cases:

Faulty Sprinkler Some oily rags are piled near some exposed wire. The rags catch fire. A nearby sprinkler system, designed to activate under extreme heat, malfunctions and fails to put out the fire. The fire consumes the building.

Unbroken Dam Some oily rags are piled near some exposed wire. The rags catch fire. Ten miles upstream there is a dam which, if it breaks, will release a flood of water that will inundate the building where the fire is occurring, putting the fire out. The dam fails to break, and the fire consumes the building.

We often cite failures as being causes of effects (or at least, we admit that they can be causally RELEVANT to effects). Intuitively, we would count the failure of the sprinkler as causally RELEVANT to the fire, but NOT count the dam’s failure to break as relevant.

In the two cases above, we seem to have a CAUSAL difference. It seems that some failures are MORE CLOSELY tied to particular effects than others. And clearly, the failures of agents are like that too. Consider:

Abe at the Pond Abe is standing near a shallow pond as a child drowns, doing nothing to save her. Meanwhile, Bea, who lives on another continent and has never seen or heard of this pond or this child, also does nothing to save the child.

I think that Abe’s failure is causally relevant to the child’s death, while Bea’s is not. In other words, Abe’s failure is like the sprinkler’s failure, while Bea’s is like the dam’s failure to break.

In turn, I think that this causal difference makes a MORAL difference. That is, I think that Abe’s failure to save the drowning child is causally relevant to that child’s death, while your failure to donate $200 to famine relief is NOT causally relevant to any particular child’s death. And, for this reason, I think that we can legitimately say that Abe has ALLOWED a child to drown, while we cannot say that you have allowed a child to starve (or, you HAVE allowed this, but only in some very weak sense). Thus any moral wrongness that attaches to (strongly) allowing harm does not attach to your failure to donate to famine relief.

[Consider: In some sense you are “allowing” all sorts of people to die right now, whom you could have easily saved. After all, there is a possible world where you, say, happen to be in Virginia Beach at the time that someone is drowning – and you are able to save them. Because you didn’t save them, you let them die. But, surely it is not morally wrong to let die all of the people whom we could have saved. What we need, then, is some principled way of distinguishing between the cases of letting die that are prima facie morally wrong (e.g., Abe’s failure) and those that are not even prima facie morally wrong (e.g., Bea’s failure). That’s what I’m trying to provide.]

3. Physical Proximity vs. Causal Proximity: The most common first objection upon hearing Singer’s argument is that it is wrong to allow the child to drown in Shallow Pond only because he is nearby, whereas starving people are very far away.

The implication of this objection is that physical proximity is morally relevant. The most common refutation of this claim is to invoke a case like the following:

Remote Pond You are clicking around on the internet when you happen upon a video feed of a security camera on another continent. The feed shows a drowning child in a pool. You correctly understand that no one else is near the pool or watching the feed, that you have the ability to remotely drain the pool (though doing so will charge your credit card $200), and that this is not a scam. You close your laptop and the child drowns.

Your failure in this case still seems morally wrong, despite the fact that the child in need was very far away. Therefore, physical proximity is not morally relevant.

I agree. But, it is worth noting that physical distance is very strongly CORRELATED with causal distance. When something happens on another continent, it is almost ALWAYS causally remote. Cases like Remote Pond stipulate a kind of situation which is extremely unusual in the real world; namely, a situation where the agent is drawn into the causal bubble surrounding the event (so to speak). In Remote Pond, your failure IS causally relevant to that child’s death. But, physical remoteness generally confers causal remoteness (unless there is something like the video/internet connection in Remote Pond to… connect us). It is no wonder, then, that the physical proximity objection seems so attractive to most people.

But, charities could take a cue from Remote Pond and run things as follows:

Extreme Solution There is a charitable organization called ‘It’s All Your Fault’, which assigns one starving child to each affluent person in the world. Charity workers then contact each of these affluent persons to let them know that they have been selected to sponsor a starving child, and that this child will receive life-saving aid only if their sponsor donates the necessary funds. Otherwise, the child will die. You receive such a letter in the mail, and are notified that Marvin will die without your help. You throw the donation request in the trash, and Marvin dies.

To me, it does seem morally wrong to fail to donate in this case. [Do you agree?]

Should we start running charities in this way? Would it be immoral to do so?

4. Analyzing the Causal Relevance of Failures: I’ve avoided offering a precise analysis which tells us which failures are causally relevant and which are not. This is the more difficult and messy part of the paper, and (as you probably noticed) it ends up getting me into a lot of trouble. Still, it should be intuitively clear that your failure becomes progressively less and less causally relevant through the following series of cases:

Life Preserver 1 (LP1) You stand alone at the shallow pond, where one child drowns, and may insert $200 into a machine that will drop one life preserver into the pond near the child.

LP2 You stand alone at the shallow pond, where ten children drown, and may insert $200 into a machine that will drop one life preserver into the pond near the children. However, there is only one life preserver left in the machine.

LP3 You and four other strangers stand at the shallow pond, where ten children drown. You each have $200 in your pockets, and may insert that money into a machine containing five life preservers, which will drop one life preserver into the pond near the children every time it receives $200. There are only five left in the machine.

LP4 You and four other strangers stand at the shallow pond, where ten children drown. There is a machine containing five life preservers, which will only make one drop of some number of life preservers into the pond before malfunctioning. You and the other four strangers (who each have $200) must pool your money, making one large deposit collectively, and then the machine will deposit up to five life preservers into the pond (one for every $200 deposited).

LP5 You and three billion other affluent strangers stand at the shallow pond, where one billion children drown. There is a machine containing an unlimited number of life preservers, which will only make one drop of life preservers into the pond before malfunctioning. You and the other strangers must pool your money, making one large deposit collectively, and then the machine will deposit some number of life preservers into the pond (one for every $200 deposited).

LP6 You and three billion other affluent strangers live in a world where there is a shallow pond on another continent, where one billion children drown. You and the other strangers may send money whenever you’d like to an organization overseas. The organization collects the money, and then distributes various types of aid however and whenever they see fit. Approximately 10% of the donations are used for expenses such as employees’ salaries and advertising. While some of their projects are more/less cost-effective than others, on average the cost of saving a life is roughly $200.

By LP6, it is simply not true that harm has come to some child because of your failure. That is, there is no particular child, X, of whom it is true that X would have been better off, had you donated. Therefore, there is no legitimate sense in which your failure is causally relevant to any particular harms. So, you do not allow harm by not donating.

[In the paper, I point out that the standard analysis says that some failure, F, is causally relevant to some effect, E, if and only if:

(1) If F occurs, E occurs. (i.e., in many of the nearby worlds where the failure occurs, the effect also occurs)

(2) If F does not occur, E does not occur. (i.e., in many of the nearby worlds where the failure does not occur, the effect also does not occur)

However, on this analysis, the failure of the dam to break is causally relevant to the fire (for if the dam doesn’t break, the fire occurs; and if the dam does break, the fire does not occur). So too would Bea’s failure to save the drowning child on the other side of the world be causally relevant to that child’s death (for, if she doesn’t rescue him, he dies; and if she does rescue him, he lives). That won’t do! A good analysis will render both the dam’s failure and Bea’s failure as NOT causally relevant (to the fire, or to the child’s death). So I add a third condition:

(3) There are many NEARBY possible worlds where F does not occur.

It seems that there are no NEARBY worlds where the dam DOES break. Similarly, there are no NEARBY possibilities where Bea saves the drowning child. A lot about the actual world would have had to have been different in order for those possibilities to obtain.
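For readers who like things compact, the amended analysis can be written symbolically. (The notation here is my own gloss, not the paper’s: ‘□→’ is the counterfactual conditional, “if it were the case that…, it would be the case that….”)

```latex
% My own symbolic gloss of the three conditions (not the paper's notation).
% F = a failure, E = an effect; worlds are ordered by similarity to actuality.
F \text{ is causally relevant to } E \iff
\begin{cases}
(1)\;\; F \,\Box\!\!\rightarrow E
  & \text{(if $F$ were to occur, $E$ would occur)}\\[2pt]
(2)\;\; \neg F \,\Box\!\!\rightarrow \neg E
  & \text{(if $F$ were not to occur, $E$ would not occur)}\\[2pt]
(3)\;\; \text{there are many nearby worlds where } \neg F
  & \text{(the non-occurrence of $F$ is a nearby possibility)}
\end{cases}
```

On this rendering, the dam’s failure plausibly satisfies (1) and (2) but fails (3): there are no nearby worlds where the dam breaks. Likewise for Bea.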

But, now consider the way that global charities are presently structured. What happens in the nearby worlds where you donate? Or in nearby worlds where you do not? While it is true that, in SOME of the worlds where you donate, fewer children will die, in some of them MORE children die—since in some of those worlds, the donations are used differently. Furthermore, even in worlds where fewer children die, the identity of WHICH particular child is saved by your donation also varies across possibilities.

Put simply, defenders of Singer claim that, by failing to donate:

(a) You allowed harm to come to some child.

But, this is true only if:

(b) Your failure to donate is causally relevant to the state of affairs of some child’s being worse off than s/he otherwise would have been.

But, this is true only if:

(c) In all/most of the nearby possible worlds where you donate, there is some child who is better off than s/he is in all/most of those possible worlds where you do not donate.

And (c) is false. Therefore, neither criterion (1) nor criterion (2) is met, and so your failure to donate is not causally relevant to any particular child’s death.
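Schematically, the inference here is a chain of modus tollens (this rendering is mine; read ‘→’ as ‘only if’):

```latex
% (a) is true only if (b); (b) is true only if (c); but (c) is false.
(a) \rightarrow (b), \qquad (b) \rightarrow (c), \qquad \neg(c)
\;\;\therefore\;\; \neg(b) \;\;\therefore\;\; \neg(a)
```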

…Still, charities needn’t be structured in this “causally amorphous” way. For example, if you had chosen to specifically sponsor one child, that specific child WOULD have been better off; that is, both (1) and (2) are TRUE of your failure to donate in Extreme Solution.

If this is pressed, I must lean on criterion (3), claiming that perhaps your failure to donate is not causally relevant to any particular child’s death even if (1) and (2) are met, so long as there are not many NEARBY worlds where you DO donate.

This addition gets me into a lot of trouble. For instance, there are no nearby worlds where psychopaths help people; or where people immersed in morally bankrupt cultures (e.g., cannibals) help people; etc. This entails that, when a psychopath stands by and watches someone drown, their failure to help is not causally relevant to that death (and they therefore have not acted wrongly—or, at the very least, they are not blameworthy). That may seem absurd to some readers.

I largely bite the bullet here, noting that ought implies can. If helping others is psychologically remote for you, then you “can” help others in only a very WEAK sense. Therefore, you OUGHT to help them in only a very weak sense. The problem is that, on my view, it may turn out that, the more of a jerk you are, and the less empathy you have, the less wrong it is for you to allow kids to starve to death.]