Friday, September 26, 2003
1 + 1 + 1 + 1 + ... + 1 = 1
Friday afternoon's seminar is "Moral Theory" with Jamie Dreier. Today we were discussing Scanlon's contractualism. For those of you without backgrounds in moral philosophy, contractualism, as Scanlon understands it, is supposed to be a theory that explains what duties people owe to one another. I don't know enough about contractualism beyond Scanlon to say whether the things I intend to criticize are fairly attributed to contractualism generally, or just to Scanlon's version of it. For safety's sake, let "contractualism" in this post mean "Scanlon's contractualism."
Contractualism is supposed to be an alternative to utilitarianism. I am a utilitarian.
One of the most implausible things about Scanlon's position is his idea that we can't aggregate people's preferences. (Not coincidentally, this is also one of the most anti-utilitarian things about Scanlon's position.) To oversimplify, Scanlon's view is that an action is wrong if a reasonable person, seeking a consensus among reasonable people, could object to it.
Scanlon gives (more or less) this example: suppose Bob is working in a broadcast station, overseeing the live television broadcast of a major sporting event. Millions of people are enjoying the broadcast. There is a malfunction, and Bob ends up being electrocuted by the broadcast equipment. It's very painful. Jack can save Bob, but to do so, he has to unplug the antenna, which will cause millions of people to miss out on the pleasure of the rest of the game. But our moral intuition, says Scanlon, is to save Bob anyway, and the following argument shows that contractualism predicts this result:
Suppose Jack asks Bob, "Bob, do you want me to leave you to be electrocuted until the game is over?" "No!" says Bob. "I have reasonable grounds for objection! That would cause me to suffer intensely!" On the other hand, the best objection any of the viewers could give to Jack's saving Bob is, "No, if you turn off the game, I'll miss the end!" Bob's concern clearly outweighs each of the millions of other concerns, and we're not allowed to add them up.
But what an absurd idea it is to not be allowed to add them up! Consider being offered the choice between April suffering two hours of intense torture and a billion other people each suffering one hour of intense torture. I think my moral intuition that we should prefer April's suffering is sufficiently mainstream as to be taken for granted. But without aggregating, how could we justify this on contractualist grounds?
"Well, April, is it ok with you if we torture you instead of the billion other people?" "No! I'd have to suffer for two hours!"
"Well, other person #1, is it ok with you if we torture the billion of you instead of April?" "No, then I'd suffer an hour!" "Sorry, other person #1, April's concern outweighs yours."
"Well, other person #2, is it ok with you if we torture the billion of you instead of April?" "No, then I'd suffer an hour!" "Sorry, other person #2, April's concern outweighs yours."
etc.
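In fact, the whole disagreement fits in a few lines of arithmetic. Here's a minimal sketch in Python; the numbers and the "strongest individual complaint" reading of the no-aggregation rule are my own gloss, not anything Scanlon writes down:

```python
# Toy numbers of my own choosing: suffering measured in person-hours of torture.
april_hours = 2        # option A: torture April for two hours
each_hours = 1         # option B: torture each of a billion others for one hour
n_others = 10**9       # a billion other people

# Utilitarian rule: aggregate first, then compare totals.
total_a = april_hours            # 2 person-hours
total_b = each_hours * n_others  # 1,000,000,000 person-hours
utilitarian_pick = "A" if total_a <= total_b else "B"

# Contractualist rule, as I'm reading Scanlon: no summing allowed. Weigh April's
# complaint against the strongest *individual* complaint on the other side.
strongest_individual = each_hours  # no single person in option B loses more than one hour
contractualist_pick = "B" if april_hours > strongest_individual else "A"

print(utilitarian_pick)     # 'A': the totals say torture April
print(contractualist_pick)  # 'B': pairwise comparison says torture the billion
```

Notice that the pairwise rule gives the same answer no matter how large n_others gets, which is exactly the feature I find absurd.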
I find it very surprising that a moral theory would even try to deny aggregation of moral worth. I guess it's because contractualists want to avoid consequences like, "for some number x, it would be morally justified to kill an innocent person in order to prevent x headaches." But that's just obviously true, isn't it? *grin*
I wonder how thoroughly I've alienated my non-philosophy readership (which is, as of now, I believe, 100% of my readership). I generally pride myself on explaining fairly complicated things pretty clearly... of course, it's easier with vocal inflection, hand motions, and responses to listeners' facial expressions.