Thursday, November 15, 2012

Williamson on Apriority

Here's an argument with the conclusion that there's no deep difference between cats and dogs.
The Dogs and Cats Argument. Although a distinction between cats and dogs can be drawn, it turns out on closer examination to be a superficial one; it does not cut at the biological joints. Consider, for example, a paradigmatic cat, Felix. Felix has the following properties: (i) he has four legs, fur, and a tail; (ii) he eats canned food out of a bowl; (iii) humans like to stroke his back. Now consider a paradigmatic dog, Fido. Fido has all three of these properties as well. For instance, Fido also has four legs, and fur, and a tail, and when he eats, it is often served from a can into a bowl. And humans like to stroke Fido's back, too. In these respects, Fido and Felix are almost exactly similar. Therefore, there can't possibly be any deep biological distinction between them.
I'm sure you'll agree that the dogs and cats argument is terrible. Put a pin in that and consider another argument.

In his contribution to Al Casullo and Josh Thurow's forthcoming volume, The A Priori in Philosophy, Timothy Williamson argues against the theoretical significance of the distinction between the a priori and a posteriori. The thesis of the paper is that "although a distinction between a priori and a posteriori knowledge (or justification) can be drawn, it is a superficial one, of little theoretical interest."

It's a somewhat puzzling paper, I think, because it's not at all clear how its broad argumentative strategy is supposed to support the conclusion. Williamson does not, for instance, articulate what he takes the apriority distinction to be, then argue that it is theoretically uninteresting. Instead, he identifies certain paradigms of a priori and a posteriori knowledge, then emphasizes various similarities between them. For example, he argues that the cognitive mechanisms underwriting certain a priori judgments are similar in various respects to those that underwrite certain a posteriori judgments. Then he spends most of the rest of the paper arguing that these are not idiosyncratic features of his particular examples. But why is this supposed to be relevant?

Williamson writes:
The problem is obvious. As characterized above, the cognitive processes underlying Norman's clearly a priori knowledge of (1) and his clearly a posteriori knowledge of (2) are almost exactly similar. If so, how can there be a deep epistemological difference between them?
But I do not find this problem at all obvious. The argument at least appears to have the structure of the terrible dogs and cats argument above. The thing to say about that argument is that identifying various similarities between two things does practically nothing to show that there aren't deep differences between them. There are deep biological distinctions between cats and dogs, but they're not ones that you can find by counting their legs or examining how humans interact with them. Similarly, Williamson offers nothing at all that I can see to rule out the possibility that there is a deep distinction between the a priori and a posteriori, but it is not one that is manifest in the cognitive mechanisms underwriting these judgments. For as Williamson himself later emphasizes, there's more to epistemology than cognitive mechanisms. If apriority lives in propositional justification—which is where I think it lives—then there's just no reason to expect it to show up at this psychological level. That doesn't mean it's not a deep distinction.

That Williamson's argument needs to be treated very carefully should also be evident from the fact that prima facie, it looks like it has enough teeth to show that the distinction between knowledge and false belief is not an epistemically deep one—a conclusion that everyone, but Williamson most of all, should reject. For the cognitive processes underlying cases of knowledge are often almost exactly similar to those underlying false beliefs. Should this tempt us to ask how, then, there could be a deep epistemological difference between them? I really don't see why.

Thursday, November 01, 2012

Knowledge and Modals in Consequents of Conditionals


Modals interact in a characteristic way with conditionals. Suppose it’s next Wednesday morning, and I haven’t checked the news in a while. Consider:
  1. Obama probably won the election.
  2. If Romney won Ohio, Obama probably won the election.
Assuming that the last time I looked at the polls, they looked roughly as they do now, (1) is true in my mouth Wednesday morning, and (2) is false. When I say (1), I say something like, ‘most of the epistemically nearest worlds are worlds in which Obama won’. When I say (2), I restrict the worlds I’m looking at: (2) says that, paying attention only to those worlds in which Romney won Ohio, most of the epistemically nearest of them are Obama-winning worlds; and that’s not so. I knew going in that the winner of Ohio was likely to win overall, whichever candidate that turned out to be. (But I know it’s probably Obama.) So (3) is true in my mouth Wednesday morning:
  3. If Romney won Ohio, Romney probably won the election.
Let’s suppose that as a matter of fact, Romney did win Ohio, contrary to my evidence. Still, since I haven’t gotten the bad news yet, my evidence still favors Obama’s having won the election. So when I say (1), it’s true. So is (3). If we look naively, this will appear puzzling. It looks like a counterexample to modus ponens, for the following are all true (though not all assertable by me Wednesday morning):
  • Romney won Ohio.
  • If Romney won Ohio, Romney probably won the election.
  • Obama probably won the election.
Call the inference from X and a sentence of the form "if X, Y" to Y naive modus ponens. Naive modus ponens leads us astray in this case.

The solution to this puzzle, of course, is that modals and conditionals interact in a subtler way than is recorded in the surface grammar of (3). The ‘probably’ modal takes wide scope; “if p, probably q” says that, restricting attention to the p worlds, q is probable. Relatedly, I can’t perform naive modus tollens on my probability conditional: Obama probably won; if Romney won Ohio, then Obama probably didn’t win; therefore, Romney didn’t win Ohio.
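To make the wide-scope reading concrete, here is a toy model in Python. Everything in it is invented for illustration: the worlds, the function names, and the weights, which just encode evidence that favors Obama overall while making Ohio pivotal.

```python
# A toy possible-worlds model of the restricted-quantifier reading.
# Each world settles who won Ohio and who won the election; 'weight'
# is that world's probability on my Wednesday-morning evidence.
# (All numbers are made up for illustration.)
worlds = [
    {"romney_ohio": True,  "obama_wins": False, "weight": 0.15},
    {"romney_ohio": True,  "obama_wins": True,  "weight": 0.05},
    {"romney_ohio": False, "obama_wins": True,  "weight": 0.70},
    {"romney_ohio": False, "obama_wins": False, "weight": 0.10},
]

def probably(prop, ws):
    # 'Probably p': most of the (weighted) worlds in ws are p-worlds.
    total = sum(w["weight"] for w in ws)
    return sum(w["weight"] for w in ws if prop(w)) / total > 0.5

def if_probably(antecedent, prop, ws):
    # 'If p, probably q', wide-scope: restrict to the p-worlds,
    # then ask whether most of the remaining worlds are q-worlds.
    return probably(prop, [w for w in ws if antecedent(w)])

obama = lambda w: w["obama_wins"]
romney = lambda w: not w["obama_wins"]
romney_ohio = lambda w: w["romney_ohio"]

print(probably(obama, worlds))                   # True:  sentence (1)
print(if_probably(romney_ohio, obama, worlds))   # False: sentence (2)
print(if_probably(romney_ohio, romney, worlds))  # True:  sentence (3)
print(probably(romney, worlds))                  # False: what naive modus
                                                 # ponens on (3) would give
```

The last line is the point: the antecedent of (3) is true in fact, and (3) itself is true, but detaching its consequent yields a falsehood, because the ‘probably’ in the consequent was never evaluated against the unrestricted set of worlds.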

The same goes for ‘might’ and ‘must’. Suppose I have seen election results for every state except Ohio, and I know for certain that the winner of Ohio won the election. Then I may truly say:
  4. If Romney won Ohio, Romney must have won the election.
  5. If Obama won Ohio, Obama must have won the election.
It doesn’t follow from the fact that Romney did win Ohio that I’d express a truth if I said “Romney must have won the election”. Indeed, that would be false: for all I know, Obama might have won. Sentence (4) says that, restricting attention to the worlds in which Romney won Ohio, he wins the election in all of them.
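The same toy treatment (with the same caveat that the worlds are invented) handles ‘must’ as a universal quantifier over the worlds left open by my evidence:

```python
# Evidence for (4) and (5): every state but Ohio is known, and the
# Ohio winner wins overall, so only two worlds remain uneliminated.
live = [
    {"romney_ohio": True,  "obama_wins": False},
    {"romney_ohio": False, "obama_wins": True},
]

def must(prop, ws):
    # 'Must p': p holds in every world left open by the evidence.
    return all(prop(w) for w in ws)

def if_must(antecedent, prop, ws):
    # 'If p, it must be that q': restrict to the p-worlds, then
    # require q in all of them.
    return must(prop, [w for w in ws if antecedent(w)])

romney = lambda w: not w["obama_wins"]
romney_ohio = lambda w: w["romney_ohio"]

print(if_must(romney_ohio, romney, live))  # True:  sentence (4)
print(must(romney, live))                  # False: even though Romney
                                           # did in fact win Ohio
```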

This is all very different from the way that conditionals interact with non-modal claims. Suppose I truly say to myself:
  6. If the carpenter was here today, the picture is on the wall.
Suppose also that the carpenter was there then (at the place and time at which I said (6)). This entails that the picture is on the wall. Or, if the picture is not on the wall, the truth of (6) entails that the carpenter wasn’t there. In other words, with non-modal consequents, you can perform naive modus ponens and modus tollens on conditionals.

Knowledge patterns with the modals. Suppose you’re trying to decide whether to trust someone. I might truly say:
  7. If he’s lying, you know he’ll just deny everything later.
This can be true even though (a) he is lying, and (b) you don’t know that he’ll deny everything later. For all you know, he’s honest, and will confirm everything. Indeed, you know that you don’t know he’ll deny everything later. But you can’t reason from this known fact and (7) to the conclusion that he isn’t lying. So naive modus ponens and modus tollens are mistakes here, just as in the cases of overt modals like ‘might’, ‘must’, and ‘probably’.

I think this is pretty decent evidence in favor of views like mine that treat ‘knows’ as either something a lot like a modal or a literal instance of a modal. I say, broadly with David Lewis, that ‘knows p’ is an evidential quantifier: it says of a given set of worlds that one’s evidence eliminates all the not-p worlds. When it appears in the consequent of a conditional, it’s very natural to restrict that set with the antecedent. So “If X, S knows p” says: first restrict attention to the X worlds; then S’s evidence must eliminate all the not-p worlds that remain.
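Here is the same toy treatment extended to ‘knows’ on that broadly Lewisian reading; the two worlds and the (trivial) elimination function are invented just to model the lying example:

```python
def knows(eliminated, prop, ws):
    # 'S knows p' relative to ws: S's evidence eliminates every
    # not-p world in ws.
    return all(eliminated(w) for w in ws if not prop(w))

def if_knows(antecedent, eliminated, prop, ws):
    # 'If X, S knows p': restrict to the X-worlds first, then ask
    # whether the evidence eliminates the not-p worlds that remain.
    return knows(eliminated, prop, [w for w in ws if antecedent(w)])

# Two invented worlds for sentence (7).
ws = [
    {"lying": True,  "will_deny": True},
    {"lying": False, "will_deny": False},  # honest; will confirm everything
]

lying = lambda w: w["lying"]
will_deny = lambda w: w["will_deny"]
eliminated = lambda w: False  # your evidence eliminates neither world

print(if_knows(lying, eliminated, will_deny, ws))  # True:  sentence (7)
print(knows(eliminated, will_deny, ws))            # False: you don't know
                                                   # it outright
```

Detachment fails here for the same structural reason as with ‘probably’ and ‘must’: the quantifier in the consequent of (7) never ranged over the unrestricted set of worlds.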