Friday, March 19, 2004

Ideal Agent Theories

The following is sometimes suggested as a normative ethical theory: "A ought to do x in situation S just in case B would do x in situation S, where B is a perfectly moral agent." The most obvious problem with a theory like that is a Euthyphro-type response: what makes B a perfectly moral agent? I'm interested in another problem with theories like this.

First, another "theory like this". This one is from Peter Railton, discussed yesterday in Jamie Dreier's metaethics class. Railton is looking to define a person's best interests. He suggests something like this: "It is in A's interest to do x in situation S just in case it would be in A+'s subjective interest to do x in situation S, where A+ is what A would be if he had unlimited cognitive and imaginative abilities." Although on a first pass this suggestion seems intuitively plausible, there are some trivial-looking counterexamples which are very difficult to write out. The counterexamples focus on the difference between A+ and A.

So, for instance, suppose I am A, and the situation I'm currently in is S. I'm considering whether to spend time talking about philosophical issues with my colleagues*. Suppose, for the sake of argument, that my philosophical insight and skill are not absolutely perfect. In this case, it might very well be in my best interest to spend time talking to others about philosophy, because it might help me to become a better philosopher, which is one of my goals. But it wouldn't be in A+'s subjective best interest to do that -- A+ would correctly reason, "I already have unlimited cognitive and imaginative abilities, which means I'm the best philosopher imaginable. So talking to other philosophers will not make me a better philosopher."

So we attempt to patch up the account: "It is in A's interest to do x in situation S just in case it would be in A+'s subjective interest for A to do x in situation S." I'm not sure how to make sense of this formulation. The following looks like it might be a counterexample:
Bubba's IQ is 95. Bubba is confronted with the choice of whether or not to press the big red button. The big red button, if pressed, would have the following result: God will empty the bank account of every person whose IQ is higher than 100, and give the money to those whose IQ is lower than 100.
Clearly, it is in Bubba's interest to press the button. (Assume money is good.) I don't know how to evaluate the proposed biconditional, though. Is it the case that it would be in Bubba+'s subjective interest for Bubba to press the button? I don't know. Bubba+, who has an IQ much higher than 100, would, if he existed, be worse off if the button got pressed. Then again, Bubba+ doesn't exist -- we could think of him as a fictional character, but that won't help; fictional characters only have fictional interests, and there's no reason that Bubba+ should fictionally have an interest in the button being pressed (either in the real world or the fictional world).

Jamie's response to this kind of suggestion was to push the idea that Bubba+ is the same person as Bubba. Well, ok... but he's still different in an important way. And it still seems like it would be in Bubba's interest to press the button, but not in Bubba+'s. So that looks like a problem. But I recognize that I'm pretty confused about the argument.

*Do fellow graduate students count as colleagues? Or do I have to have a job before I can have colleagues?
