A couple of weeks ago, Chris Bertram on Crooked Timber
presented a very interesting thought experiment about rational choice and utility theory:
You are in hell and facing an eternity of torment, but the devil offers you a way out, which you can take once and only once at any time from now on. Today, if you ask him to, the devil will toss a fair coin once and if it comes up heads you are free (but if tails then you face eternal torment with no possibility of reprieve). You don't have to play today, though, because tomorrow the devil will make the deal slightly more favourable to you (and you know this): he'll toss the coin twice but just one head will free you. The day after, the offer will improve further: 3 tosses with just one head needed. And so on (4 tosses, 5 tosses, ... 1000 tosses ...) for the rest of time if needed. So, given that the devil will give you better odds on every day after this one, but that you want to escape from hell some time, when should you accept his offer?
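To make the odds concrete, here's a quick sketch of the probability of going free if you accept on day n: with n tosses and at least one head needed, P(free) = 1 - (1/2)^n (the days and numbers below are just illustrations, not part of the puzzle):

```python
# Probability of freedom if you take the devil's offer on day n:
# n tosses of a fair coin, freed by at least one head,
# so P(free) = 1 - P(all tails) = 1 - (1/2)**n.
def p_free(n: int) -> float:
    return 1 - 0.5 ** n

for n in (1, 2, 3, 10, 1000):
    print(f"day {n}: P(free) = {p_free(n)}")
```

The odds improve every day (0.5, 0.75, 0.875, ...) and approach, but never reach, certainty.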
I recognized that this was a fascinating problem, and presented the following in a comment:
The following is surely a bad argument for the conclusion that, if I'm a rational agent, no day is soon enough. Unfortunately, I can't see what's wrong with the argument.
Suppose it's now day k. I could take the chance now, or wait until tomorrow. By choosing to wait until tomorrow, I incur the disutility of an additional day of torture -- but I also gain some finite probability of an infinite utility -- escaping hell. Therefore, this probability should carry greater weight in my judgement than the finite day of torture, and I should wait another day.
Of course, if this is right, it suggests that we should NEVER take the devil's offer -- and that's pretty clearly just dumb. I'm not sure what this tells us, other than that this is an interesting question.
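The regress can be put in numbers. Waiting from day n to day n+1 adds exactly (1/2)^(n+1) to the probability of freedom -- a gain that halves every day, yet under naive expected-utility reasoning any nonzero gain multiplied by an infinite payoff swamps one finite day of torment. A sketch (the particular days printed are just illustrations):

```python
# Extra probability of freedom gained by waiting from day n to day n+1:
# delta(n) = (1 - (1/2)**(n+1)) - (1 - (1/2)**n) = (1/2)**(n+1).
# The gain shrinks geometrically, but naive expected utility says
# delta(n) * infinity = infinity > one day's finite disutility -- forever.
def marginal_gain(n: int) -> float:
    return 0.5 ** (n + 1)

for n in (1, 2, 3, 10):
    print(f"waiting past day {n} adds {marginal_gain(n)} to P(free)")
```

So the argument recommends waiting on every day, which is exactly the absurd conclusion above.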
In the undergrad course for which I'm grading this semester, we talked last week about Pascal's Wager -- in a nutshell, Pascal argued that a rational self-interested agent should believe in God, because in so doing, he has everything to gain and very little to lose. Here is one possible, more formal reconstruction:
Let A be the world in which I believe in God, and B be that in which I do not believe in God.
Suppose God exists. Then A leads to everlasting bliss, and B leads to eternal damnation. So A is better for me by an infinite amount.
Suppose God doesn't exist. Then there is no afterlife, so B is preferable to A by the cost of believing in God (after all, it's not fun to be virtuous) -- maybe 25 hedon-hours.
Then for any non-zero probability of God's existence, the expected utility from A is greater than that from B -- because it's infinite. So the prudentially rational person will believe in God.
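The dominance step can be seen by replacing "infinite" bliss with a large finite stand-in M. The 25 hedon-hour cost of belief is from the reconstruction above; M and the probability p are illustrative assumptions:

```python
# Naive expected-utility comparison for the Wager, with a large finite
# stand-in M for infinite bliss.  believing_advantage = EU(A) - EU(B):
# with probability p (God exists) belief gains M; with probability 1 - p
# it loses the finite cost of being virtuous (25 hedon-hours in the text).
def believing_advantage(p: float, M: float, cost: float = 25.0) -> float:
    return p * M - (1 - p) * cost

# For any fixed p > 0, the advantage turns positive once
# M > (1 - p) * cost / p -- so as M grows without bound,
# belief dominates no matter how small p is.
```

For example, even at p = 0.000001 the advantage is positive once M exceeds roughly 25 million hedon-hours, and a genuinely infinite payoff exceeds every such threshold.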
But surely this isn't right. Consider an example from Felicia Nimue Ackerman, given in class: suppose I'm offered a highly experimental drug, which has a 99% chance of torturing me to death (finite disutility), and a 1% chance of eternal bliss (infinite utility). I wouldn't take the drug, and I'm not inclined to think I'm therefore being irrational.
The drug case, Pascal's Wager, and the bargain with the devil all have in common that they involve comparisons of infinite utility with finite utility. So one possible conclusion is simply that infinite quantities aren't allowed into the expected utility game -- this is rather unsatisfying, though, because I want there to be a correct answer to each of these cases. Another, more drastic, possible solution is that there is no fact of the matter about what a rational person would do in general -- personal risk-affinity should be a factor... but this doesn't seem right to me, either.
I'm going to go read Jerry Fodor on acquired perception now.