Tuesday, March 16, 2004

Taking one box is like voluntarily smacking oneself really hard in the head.

Allan has a post on Newcomb's Paradox. So does Matt Weiner. And now, so do I. Here's the scenario:
An extremely reliable predictor of human behavior places two boxes -- a clear one and a black one -- on the table. He's going to give you the choice of taking either (1) just the black one or (2) both of them. But he's already predicted which choice you'll take. And if he predicted that you'll take just the black one, then he put a million dollars under it. If he predicted that you'll take both of them, then he put nothing under it. Either way, he also puts $1000 under the clear one. Which choice do you take?
I, who am a two-boxer, reason like this, and it seems perfectly obvious to me: no matter what he's predicted, I'm better off, by $1000, taking both. That's because there either is or is not a million dollars in the black box, which means there is a total of either $1000 or $1,001,000 on the table, and I don't get to pick which. So I should take all of the money on the table.

Allan, who is a one-boxer, offers this argument for one-boxism (Matt, who is also a one-boxer, hasn't told us why he is, but maybe it's for a reason like this):
P1: There is a strong probabilistic connection between taking one box and getting a million dollars (i.e., 99% of people who take one box get a million dollars).
P2: There is a strong probabilistic connection between taking two boxes and getting a thousand dollars (i.e., 99% of people who take two boxes get a thousand dollars).
C: I should take one box (if I want more money).
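Just to put numbers on the disagreement, here's a quick sketch (my own illustration, not Allan's) of the expected-value arithmetic that makes the one-boxer's argument look compelling, next to the fixed-prediction comparison that drives my two-boxer reasoning. The 99% figure is from the premises above; everything else is made up for the example.

```python
MILLION = 1_000_000
THOUSAND = 1_000
RELIABILITY = 0.99  # Allan's figure: P(the prediction matches your actual choice)

# The one-boxer's expected-value calculation: condition on your own choice.
ev_one_box = RELIABILITY * MILLION + (1 - RELIABILITY) * 0
ev_two_box = RELIABILITY * THOUSAND + (1 - RELIABILITY) * (MILLION + THOUSAND)
print(f"EV(one box)   = ${ev_one_box:,.0f}")   # $990,000
print(f"EV(two boxes) = ${ev_two_box:,.0f}")   # $11,000

# The two-boxer's dominance argument: hold the already-made prediction fixed.
for predicted_one_box in (True, False):
    black_box = MILLION if predicted_one_box else 0
    one_box_payoff = black_box
    two_box_payoff = black_box + THOUSAND  # the clear box always holds $1000
    print(f"predicted one box: {predicted_one_box} -> "
          f"one-boxing gets ${one_box_payoff:,}, two-boxing gets ${two_box_payoff:,}")
# In either row, taking both boxes is worth exactly $1000 more.
```

The expected-value lines are what P1 and P2 point at; the loop is my dominance reasoning from above. They answer different questions, which is where I think the trouble lies.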
Even if the premises are correct, I think that the argument is invalid. Once the predictor has done his predicting, Allan just doesn't have any say in the matter of whether he gets that million. The "strong probabilistic connection" is not causal. I think that the underlying structure of the probabilistic connection actually looks like this:
  • There is a strong probabilistic connection between taking one box and being the kind of person who takes one box.
  • There is a strong probabilistic connection between being the kind of person who takes one box and being predicted by the predictor to take one box.
  • There is a (perfectly) strong probabilistic connection between being predicted by the predictor to take one box and getting more money.
Once we lay it out this way, we see that the second line, not the first one, is what makes one-boxers well-off. So it would be rational, perhaps, to try to become the kind of person who would take one box (if predictors routinely offered such incentives), but that doesn't mean it'd be rational to take just the one box once the prediction has already been made.

Here is a case which I allege to be parallel. Suppose that the Friends of Lithuanian-Americans Society has decided it wants to do the following for the benefit of Lithuanian-Americans: representatives will go around town, and for each person they find, they will look up that person's name in the Lithuanian-American persons' registry. If the person is on the list, then the Society will give him a million dollars. If he isn't, then they won't. The following fact is true about Lithuanian-Americans: they very often* have a rare mental disorder which causes them to smack themselves, very hard and very often, in the head. Almost* no one else has this disorder.

You've heard the public service announcements in which the FLAS explained their program and the criterion for deciding whether to give the money. You've observed FLAS members going around town, and you've noticed the following interesting thing: every time the interviewee smacks himself really hard in the head, the FLAS member, after finishing checking his book, gives the interviewee a million dollars. And every time the interviewee does not smack himself really hard in the head, the FLAS member, after checking the book, does not give him a million dollars. This, of course, is easily explained by the high correlation between Lithuanian-Americanism and head-smacking, of which you're aware.

"Interesting," you might say to yourself. "There is a high statistical correlation between head-smacking and receiving a million dollars!" You'd be correct about that. But you'd be incorrect if you went on to reason, "When they come and interview me, I should smack myself really hard in the head!" Assuming that money is good and getting smacked in the head is bad, then insofar as you have a choice about whether to smack yourself in the head, it is obviously not rational to do so. There is no causal connection from head-smacking to money-receiving, just as there isn't one from taking one box to receiving a million dollars. In general, arguments of the form "there is a high correlation between x-ing and something good" are insufficient to conclude that you have a reason to x.

*The argument goes through even if we make all Lithuanian-Americans, and no one else, have the disorder.
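For what it's worth, here is a toy simulation of the FLAS setup (my own sketch, using the strengthened version from the footnote, where all and only Lithuanian-Americans smack). It illustrates just the point above: the correlation between smacking and the money is real, but intervening on the smacking changes nothing, because the payout depends only on the registry.

```python
import random

random.seed(0)

def flas_payoff(on_registry: bool, smacks_self: bool) -> int:
    """FLAS checks the registry, not the behavior; the smack is causally idle
    (the unused parameter is the whole point)."""
    return 1_000_000 if on_registry else 0

# Strengthened version: all and only Lithuanian-Americans have the disorder.
population = [random.random() < 0.1 for _ in range(100_000)]  # True = on the registry

smackers     = [flas_payoff(la, smacks_self=True)  for la in population if la]
non_smackers = [flas_payoff(la, smacks_self=False) for la in population if not la]
print("average payout, smackers:    ", sum(smackers) / len(smackers))          # 1,000,000
print("average payout, non-smackers:", sum(non_smackers) / len(non_smackers))  # 0

# Now intervene on the behavior: whatever the registry already says about you,
# smacking yourself changes the payout by exactly nothing.
for on_registry in (True, False):
    print(on_registry,
          flas_payoff(on_registry, smacks_self=True),
          flas_payoff(on_registry, smacks_self=False))
```

Swap "smacks_self" for "takes one box" and "on_registry" for "predicted to take one box", and, as I see it, nothing in the structure changes.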
