A reader named pathos made the following objection to my remarks about utilitarian moral theory:
More generally, the common line of reasoning in ethics is troubling. The arguments tend to be (1) X strikes one's moral intuition as immoral; (2) Let us make a line of reasoning leading to the incontrovertible conclusion that Y strikes one's moral intuition as moral; (3) See how Y is structurally identical to X, therefore (4) X is moral.
By deferring to our moral intuitions in their arguments, the utilitarians have essentially conceded the point before the game has started. Yes, I believe Y is moral, but I simultaneously feel that logically indistinguishable X is immoral. And since morality is based on psychology and intuition, not on any great holistic theory, there is nothing wrong with that.
This is a good lead-in to a Philosophy Buzzword post I've been meaning to write for a few days. In fact, reflective equilibrium was the concept I had in mind when I first decided to start a series like this. As a reminder, I mean to be writing generally for a non-philosophy audience, but I also mean to be reporting uncontroversial facts about how philosophy is done. If you know better than I, and I'm wrong, please tell me.
In many realms of philosophy, intuition plays a central role. Ethics is a good example -- we construct ethical theories based on the idea that our intuitions are correct. But, as pathos points out, on the surface, that just doesn't look like a productive way to go about constructing a theory. We assume intuitions are right, and use that to prove that some of our intuitions are wrong? Philosophers do (at least claim to) have an answer.
Reflective Equilibrium is, I think, the mainstream method for choosing a theory to account for intuitions. It's not unchallenged as a source of justification, but I think I can accurately describe it as a tool that most philosophers who deal with these issues use. I will stick to the ethics example here because ethics is both a good example and interesting, but reflective equilibrium can be applied to many other kinds of questions as well.
Very roughly speaking, here's the idea: you want to have a theory of ethics, and by the time you're done, you want it to be consistent with all your ethical beliefs. This may involve rejecting some of your intuitions, because some of them might not get along well together -- either because they directly contradict one another, or because no plausible theory could accommodate both.
Here's the method: in order to come up with a theory of morality, you start with some "raw" moral intuitions. Raw moral intuitions are intuitions about actions in specific cases. "Yesterday when my mom said 'good morning', it would have been morally wrong to shoot her," etc. The next step is to come up with a possible theory to explain most or all of these intuitions. The point of a theory, of course, is to generalize beyond the specific intuitions you've already looked at. Maybe you come up with the following as a piece of your theory: "it's always morally wrong to kill people."
But you're not done yet -- probably not by a long shot. Your theory covers cases that you haven't consulted your intuitions about yet. So now you want to think of possible problems with your theory -- can you think of any counterexamples? "Well, I think it'd be ok to kill someone if he were trying to kill me." Or maybe, "I think Buffy was morally justified in killing Angel, because that was the only way to stop the universe from being sucked into a hell dimension." Now you have a tension between your theory and your specific intuitions, and you have to decide which will give way to the other. At this point, I think that the odds are good that you'll choose to modify your theory, rather than conclude that it is actually morally wrong to kill in self-defense. So you might add in principled exceptions to your theory, or you might re-frame it altogether.
What factors do we consider when weighing theory versus raw intuition? Here are a few suggestions:
- Strength of intuition (some intuitions are stronger than others, and will therefore weigh more heavily).
- Simplicity of theory.
- Perceived reliability of intuition (perhaps we should trust our intuitions less in some cases, such as when we have a particular emotional attachment to an issue).
- Consistency of intuition (if we have intuitions that contradict one another, that casts doubt on both, and we'd better reject at least one).
- etc.
We achieve reflective equilibrium when we stop altering our theory -- that is, every intuition is either confirmed by the theory or rejected as wrong.
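To make the loop concrete, here is a minimal sketch in Python of the bookkeeping involved. The specific cases, the "killing is never permissible" rule, and the 0.8 strength threshold are all illustrative assumptions of mine; in real reflective equilibrium the weighing of theory against intuition is a matter of judgment, not a numeric formula.

```python
# Toy sketch of the reflective-equilibrium loop described above.
# The cases, the rule, and the threshold are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Intuition:
    case: str
    verdict: bool      # True = "morally permissible"
    strength: float    # 0.0 (weak hunch) to 1.0 (unshakable)

def theory(case: str) -> bool:
    """Current candidate theory: killing is never permissible."""
    return "kill" not in case

intuitions = [
    Intuition("kill mom for saying good morning", False, 0.99),
    Intuition("kill an attacker in self-defense", True, 0.9),
    Intuition("keep a wallet you found on the street", False, 0.3),
]

REVISE_THRESHOLD = 0.8  # above this, we'd rather revise the theory than drop the intuition

def reflect(intuitions, theory):
    kept, rejected, revisions_needed = [], [], []
    for i in intuitions:
        if theory(i.case) == i.verdict:
            kept.append(i)                 # theory confirms the intuition
        elif i.strength >= REVISE_THRESHOLD:
            revisions_needed.append(i)     # strong counterexample: revise the theory
        else:
            rejected.append(i)             # weak intuition: give it up
    return kept, rejected, revisions_needed

kept, rejected, revisions = reflect(intuitions, theory)
print("Counterexamples forcing revision:", [i.case for i in revisions])
```

The only point of the sketch is the stopping condition: you keep revising the theory (or rejecting intuitions) until the list of outstanding counterexamples is empty, at which point every intuition is either confirmed by the theory or rejected as wrong.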
One consequence to recognize is that our theory, once it's been decided upon, will give us guidance in situations where our intuitions are unclear. Maybe you don't have a decisive moral opinion as to whether it's permissible to cheat on your taxes, or eat animals. Once you decide on a theory, grounded in the cases you do have moral intuitions about, that theory can guide you in the less clear cases.
I believe that the method of reflective equilibrium, properly understood, does have a strong appeal, at least on the surface. A close parallel can be drawn with the method by which scientists come up with theories to explain experimental data -- observations ground theories, and surprising observations either modify the theories or are discounted as experimental error.
The important point behind reflective equilibrium, I think, is the recognition that our intuitions are not the last word -- a solid, attractive theory which accounts for most intuitions may very well justify the rejection of some intuitive beliefs.
There are important questions I'm not addressing here. Two of what I think are the most pressing and interesting ones are (1) how large a set of beliefs should we be considering when engaging in reflective equilibrium? and (2) why should we give intuitions any weight in forming theories at all? If you want to read something much more in-depth about reflective equilibrium from a much more competent authority than I, I recommend Norman Daniels's "Reflective Equilibrium" in the Stanford Encyclopedia of Philosophy.