A couple of days ago, when I posted on contractualism, I half-seriously made the following remark:
I find it very surprising that a moral theory would even try to deny aggregation of moral worth. I guess it's because they want to avoid consequences like, "for some number x, it would be morally justified to kill an innocent person in order to prevent x headaches." But that's just obviously true, isn't it?
I was half-serious in the sense that while I do believe that the quoted claim is true, I do not genuinely believe it to be
obviously true. Neither, apparently, do many of you.
Some interesting comments on that post:
Joshua said:
Look at it this way:
1) It's wrong to kill 1 person to prevent 1 headache
2) If it is wrong to kill 1 person to prevent X headaches, it's wrong to kill 1 person to prevent X+1 headaches.
How then do you get to "There exists an X such that it is right to kill 1 person to prevent X headaches" in a reasonable, principled fashion?
Well, the best response to most instances of the
Sorites paradox is to deny the generalizing step (2). I'm fully willing to embrace the moral fact that it's wrong to kill one innocent person in order to prevent one headache (indeed, if it were otherwise, suicide pills would replace aspirin). But where's the justification for the second claim?
I admit that (2) sounds plausible -- but I say it's false. A consequentialist ought to recognize that while one headache is bad, two headaches are worse, and death is worse still. Furthermore, the difference in badness between death and two headaches is smaller than the difference between death and one headache.
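To see concretely where (2) breaks down, here's a toy model (my own illustration, with assumed quantities, not anything from the comments): suppose each headache carries some small positive badness h, that headache-badness adds up across headaches, and that death carries a large but finite badness D. Then killing to prevent X headaches does more harm than it prevents exactly when

\[ D > X \cdot h, \quad \text{i.e., } X < \frac{D}{h}, \]

so the killing stops being wrong once X crosses the threshold D/h, and premise (2) fails at precisely that step. Additivity and finiteness are assumptions, of course -- but they're exactly the assumptions the lexical view discussed next has to deny.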
Dave said:
I wonder if the solution might involve dividing suffering into classes - different levels across which it is not meaningful to compare. There cannot be some number x where x headaches override one murder, because murder is in a more intense class of suffering. We recognize that one murder is so bad that we're willing to accept *any number* of headaches in order to prevent it.
This is what philosophers call a lexical ordering (think of a dictionary, where all the A-words come before all the B-words, and so on). And yes, people do try to hold this position within consequentialist ethics. The idea sounds right at first -- it lets us keep our intuition that murder is worse than any number of headaches -- but I suggest it loses all intuitive force once we recognize that there are many levels of harm between a headache and a murder.
It's pretty clear to me that it's possible, in principle, to construct a near-continuum of wrongs, from the infliction of a headache up to murder, with a very long series of slightly-worse things you could do to a person. There's no principled way to point at one of the tiny gaps in that series and say, "that's the gap where no number of the slightly lesser harm could ever outweigh one instance of the slightly greater one."
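To spell out why the continuum makes trouble (a sketch of the chaining argument, with hypothetical harm levels of my own choosing): suppose harms come in levels 1 through n, from headaches up to death, and that for each level k some finite number N_k of level-k harms together outweigh one level-(k+1) harm. Writing H_k for a level-k harm and \prec for "is less bad than", and assuming these trades compose,

\[ \text{death} \;\prec\; N_{n-1}\,H_{n-1} \;\prec\; N_{n-1}N_{n-2}\,H_{n-2} \;\prec\; \cdots \;\prec\; \Big(\prod_{k=1}^{n-1} N_k\Big)\,H_1. \]

That final product is an enormous number of headaches, but a finite one. To block the conclusion, the lexical view must say that some particular trade in the chain fails -- and pointing at that link in a principled way is just what the near-continuum makes impossible.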
Here's
A Second Way of Thinking About Things That Demonstrates That I'm Right. Catchy title, huh?
(This example is from Alastair Norcross; see below.)
Suppose I have a headache and no pain-killers in my house. I'm considering driving five minutes to the drug store to buy the means to cure my headache. Suppose further that I know, as an empirical fact, that the drive increases my chance of dying in a car accident by some non-zero amount -- say, one in a billion. Surely I'm not being irrational to risk my life just to cure a headache! This is meant to show that while the harm of death and the harm of a headache differ enormously in degree, they do not differ in kind, and it is reasonable to trade one for an appropriate amount of the other.
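To make the trade explicit (my arithmetic, using the assumed one-in-a-billion figure, not Norcross's own numbers): taking the drive is rational exactly when the expected badness of the added death risk is less than the badness of the headache it cures,

\[ 10^{-9} \cdot D < h \quad \Longleftrightarrow \quad D < 10^{9} \cdot h. \]

Anyone who thinks the drive is reasonable is thereby committed to death being at most (roughly) a billion times as bad as a headache -- that is, to the two harms sitting on a single scale with a finite exchange rate between them.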
Follow-Up: Alastair Norcross, one of my undergraduate professors at Rice University and a major influence on my ethical theorizing, has written a fair amount about these issues. Check out "Great Harms from Small Benefits Grow: How Death Can Be Outweighed by Headaches" and "Comparing Harms: Headaches and Human Lives". Both are short and easy to read. Remember, I'm heavily influenced by him, so don't be surprised when he says the things I did.
(As a side note, I've learned something interesting over the years about debate in general. I'd always thought that when intelligent people disagreed about something, they argued about it until one of them was clearly right, and then they walked away in agreement. Since then, I've realized that arguments are often won long after they're over -- for much of Alastair's Consequentialism class last fall, I took myself to be successfully refuting consequentialism. And now I'm sitting here, giving you the arguments he gave me.)