Thursday, November 15, 2012

Williamson on Apriority

Here's an argument with the conclusion that there's no deep difference between cats and dogs.
The Dogs and Cats Argument. Although a distinction between cats and dogs can be drawn, it turns out on closer examination to be a superficial one; it does not cut at the biological joints. Consider, for example, a paradigmatic cat, Felix. Felix has the following properties: (i) he has four legs, fur, and a tail; (ii) he eats canned food out of a bowl; (iii) humans like to stroke his back. Now consider a paradigmatic dog, Fido. Fido has all three of these properties as well. For instance, Fido also has four legs, and fur, and a tail, and when he eats, it is often served from a can into a bowl. And humans like to stroke Fido's back, too. In these respects, Fido and Felix are almost exactly similar. Therefore, there can't possibly be any deep biological distinction between them.
I'm sure you'll agree that the dogs and cats argument is terrible. Put a pin in that and consider another argument.

In his contribution to Al Casullo and Josh Thurow's forthcoming volume, The A Priori in Philosophy, Timothy Williamson argues against the theoretical significance of the distinction between the a priori and a posteriori. The thesis of the paper is that "although a distinction between a priori and a posteriori knowledge (or justification) can be drawn, it is a superficial one, of little theoretical interest."

It's a somewhat puzzling paper, I think, because it's not at all clear how its broad argumentative strategy is supposed to support the conclusion. Williamson does not, for instance, articulate what he takes the apriority distinction to be, then argue that it is theoretically uninteresting. Instead, he identifies certain paradigms of a priori and a posteriori knowledge, then emphasizes various similarities between them. For example, he argues that the cognitive mechanisms underwriting certain a priori judgments are similar in various respects to those that underwrite certain a posteriori judgments. Then he spends most of the rest of the paper arguing that these are not idiosyncratic features of his particular examples. But why is this supposed to be relevant?

Williamson writes:
The problem is obvious. As characterized above, the cognitive processes underlying Norman's clearly a priori knowledge of (1) and his clearly a posteriori knowledge of (2) are almost exactly similar. If so, how can there be a deep epistemological difference between them?
But I do not find this problem at all obvious. The argument at least appears to have the structure of the terrible dogs and cats argument above. The thing to say about that argument is that identifying various similarities between two things does practically nothing to show that there aren't deep differences between them. There are deep biological distinctions between cats and dogs, but they're not ones that you can find by counting their legs or examining how humans interact with them. Similarly, Williamson offers nothing at all that I can see to rule out the possibility that there is a deep distinction between the a priori and a posteriori, but it is not one that is manifest in the cognitive mechanisms underwriting these judgments. For as Williamson himself later emphasizes, there's more to epistemology than cognitive mechanisms. If apriority lives in propositional justification—which is where I think it lives—then there's just no reason to expect it to show up at this psychological level. That doesn't mean it's not a deep distinction.

That Williamson's argument needs to be treated very carefully should also be evident from the fact that prima facie, it looks like it has enough teeth to show that the distinction between knowledge and false belief is not an epistemically deep one—a conclusion that everyone, but Williamson most of all, should reject. For the cognitive processes underlying cases of knowledge are often almost exactly similar to those underlying false beliefs. Should this tempt us to ask how, then, there could be a deep epistemological difference between them? I really don't see why.

Thursday, November 01, 2012

Knowledge and Modals in Consequents of Conditionals


Modals interact in a characteristic way with conditionals. Suppose it’s next Wednesday morning, and I haven’t checked the news in a while. Consider:
  1. Obama probably won the election.
  2. If Romney won Ohio, Obama probably won the election.
Assuming that the last time I looked at the polls, they favored Obama roughly as they do now, (1) is true in my mouth Wednesday morning, and (2) is false. When I say (1), I say something like, ‘most of the epistemically nearest worlds are worlds in which Obama won’. When I say (2), I restrict the worlds I’m looking at: I say that, paying attention only to those worlds in which Romney won Ohio, most of the epistemically nearest of them are Obama-winning worlds. I knew going in that the winner of Ohio is likely to win overall, whichever candidate that is. (But I know it’ll probably be Obama.) So (3) is true in my mouth Wednesday morning:
  3. If Romney won Ohio, Romney probably won the election.
Let’s suppose that as a matter of fact, Romney did win Ohio, contrary to my evidence. Still, since I haven’t gotten the bad news yet, my evidence still favors Obama’s having won the election. So when I say (1), it’s true. So is (3). If we look naively, this will appear puzzling. It looks like a counterexample to modus ponens, for the following are all true (not assertable by me Wednesday morning, but true):
  • Romney won Ohio.
  • If Romney won Ohio, Romney probably won the election.
  • Obama probably won the election.
Call the inference from X and a sentence of the form "if X, Y" to Y, naive modus ponens. Naive modus ponens leads us astray in this case.

The solution to this puzzle, of course, is that modals and conditionals interact in a subtler way than is recorded in the surface grammar of (3). The ‘probably’ modal takes wide scope; “if p, probably q” says that, restricting attention to the p worlds, q is probable. Relatedly, I can’t perform naive modus tollens on my probability conditional: Obama probably won; if Romney won, then Obama probably didn’t win; therefore, Romney didn’t win.
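Here’s a toy model of that wide-scope reading (a sketch in Python; the worlds, credences, and helper names are all my own inventions for illustration, not anything fixed by the post):

```python
# Toy model: epistemically possible worlds, weighted by my Wednesday-morning
# credences. Each world records who won Ohio and who won overall.
worlds = [
    # (credence, ohio_winner, overall_winner)
    (0.55, "Obama",  "Obama"),
    (0.05, "Obama",  "Romney"),
    (0.10, "Romney", "Obama"),
    (0.30, "Romney", "Romney"),
]

def probably(pred, ws):
    """'Probably p' relative to a set of worlds ws: most of the credence
    in ws falls on worlds where pred holds."""
    total = sum(c for c, _, _ in ws)
    return sum(c for c, oh, ov in ws if pred(oh, ov)) / total > 0.5

obama_won  = lambda oh, ov: ov == "Obama"
romney_won = lambda oh, ov: ov == "Romney"
romney_ohio = [w for w in worlds if w[1] == "Romney"]  # antecedent restriction

print(probably(obama_won, worlds))        # True:  (1) is true
print(probably(obama_won, romney_ohio))   # False: (2) is false
print(probably(romney_won, romney_ohio))  # True:  (3) is true
```

If the actual world happens to be one where Romney took Ohio and won, then ‘Romney won Ohio’, (1), and (3) all come out true together, and naive modus ponens on (3) leads astray, just as above.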

The same goes for ‘might’ and ‘must’. Suppose I have seen election results for every state except Ohio, and I know for certain that the winner of Ohio won the election. Then I may truly say:
  4. If Romney won Ohio, Romney must have won the election.
  5. If Obama won Ohio, Obama must have won the election.
It doesn’t follow from the fact that Romney did win Ohio that I’d express a truth if I said “Romney must have won the election”. Indeed, it’s false—for all I know, Obama might have won. Sentence (4) says that, restricting attention to the Romney-won-Ohio worlds, Romney wins the election in all of them.

This is all very different from the way that conditionals interact with non-modal claims. Suppose I truly say to myself:
  6. If the carpenter was here today, the picture is on the wall.
Suppose also that the carpenter was there then (the place and time where I said (6)). This entails that the picture was on the wall. Or if the picture is not on the wall, the truth of (6) entails that the carpenter wasn’t there. In other words, with non-modal consequents, you can perform naive modus ponens and modus tollens on conditionals.

Knowledge patterns with the modals. Suppose you’re trying to decide whether to trust someone. I might truly say:
  7. If he’s lying, you know he’ll just deny everything later.
This can be true even though (a) he is lying, and (b) you don’t know that he’ll deny everything later. For all you know, he’s honest, and will confirm everything. Indeed, you know that you don’t know he’ll deny everything later. But you can’t reason from this known fact and (7) to the conclusion that he isn’t lying. So naive modus ponens and modus tollens are mistakes here, just as in the cases of the obvious modals like might, must, and probably.

I think this is pretty decent evidence in favor of views like mine that treat ‘knows’ as either something a lot like a modal or a literal instance of a modal. I say, broadly with David Lewis, that ‘knows p’ is an evidential quantifier: it says of a given set of worlds that one’s evidence eliminates all the not-p worlds. When it appears in the consequent of a conditional, it’s very natural to restrict the set with the antecedent. So “If X, S knows p” says, first restrict your attention only to the X worlds; S’s evidence eliminates the not-p worlds that remain.
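Here is a minimal sketch of that quantificational treatment, run on the lying case above (the three-world model and the evidence set are invented for illustration):

```python
# A made-up three-world model of the lying case.
all_worlds = {"lying_denies", "lying_confirms", "honest_confirms"}

# Worlds the hearer's evidence fails to eliminate: for all she knows he's
# lying (and will deny everything), or honest (and will confirm). Her
# evidence does rule out his lying and then confirming.
uneliminated = {"lying_denies", "honest_confirms"}

def knows(p, domain):
    """'S knows p' relative to a domain of worlds: S's evidence eliminates
    every not-p world in the domain."""
    return all(p(w) for w in domain & uneliminated)

will_deny = lambda w: w.endswith("denies")
lying_worlds = {w for w in all_worlds if w.startswith("lying")}

print(knows(will_deny, all_worlds))    # False: she doesn't know he'll deny
print(knows(will_deny, lying_worlds))  # True:  (7) comes out true
```

On this picture the failure of naive modus ponens is no surprise: the unrestricted and restricted claims quantify over different domains.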

Wednesday, October 31, 2012

Sider, structure, reduction, and knowledge first


This is a continuation of yesterday's post. Yesterday I identified what seemed to me to be a problem for the way that Ted Sider wanted to explain why various macro-level things are more privileged—basically, they're said to be more joint-carving. But as I said yesterday, I just don't see that they are.

I suspect, however, that one can get something quite a bit like Sider's picture here if one is willing to add a bit more structure. The prospects for a purely physical story about chemical objects and properties seem reasonable. It doesn't seem hopeless to try to give a reasonably simple definition of molecule or magnesium or valence in purely physical terms. So if we assume the (absolute) fundamentality of physics, we can run the story Sider wants for why we refer to molecules instead of molecules-or-cucumbers, or even molecules-before-2013-and-regions-of-space-afterwards, because the physical definition of molecule is significantly simpler than the definitions of these more bizarre properties. (In the former case, a purely physical definition will be insanely complex, as in the case of pig; in the latter, it will not be insanely complex, but it will be more complex than that for molecule.)

This is basically just a way of expressing the familiar idea that chemistry somehow reduces to, or emerges from, physics. But if we buy into Ted's general ideas, we can add this: it is part of the objective structure of the world that chemical properties are related to physical properties in this way. The 'book of the world' will give us the chemical on top of the physical (and the chemical is objectively privileged over the schmemical).

Now what happens when we go up another level? It's pretty natural to suppose that cell biology relates to chemistry as chemistry does to physics. So --- and here's where the picture I'm describing departs from Ted's --- when adjudicating between which objects we refer to in our discourse about cell biology, objects with reasonably simple definitions in chemical terms --- not physical terms --- are privileged over ones that lack them. We don't always go back to the most fundamental; we just go back to the more fundamental domain that is appropriate for the matter at hand. Often, but not always, the simpler definition in the more fundamental theory will correspond to the simpler definition in the ultimately fundamental theory; when the two come apart, I think we should go with the less fundamental one. (It's having a better chemical account that makes a particular referent of 'cell' the preferred one, not having a better physical account.)

The reason I'm interested in this, besides the fact that it's interesting, is that I'm leaning in this kind of a direction as a way of making sense of what the 'knowledge first' attitude is. (Yes, I'm reading metaphysics, but it's in the service of epistemology, I swear!) I understand it as a metaphysical thesis: knowledge is a more fundamental state than has been traditionally recognized. In the terms of this broad way of thinking about theorizing about the world, knowledge shows up at a more fundamental level than one might have thought. (Compare a 'life first' theorist, who thinks that the attempt to define life in biological terms is a mistake; life's home is really at the chemical level—we need to invoke life to understand, say, combustion.) How early should knowledge appear? Presumably, people could differ about this. If you wanted to, you could think that knowledge was perfectly fundamental; knowledge is as basic as quarks or whatever. You'd oppose any kind of reduction of knowledge to anything. This doesn't sound very plausible, but you could say that if you wanted to. My suspicion is that knowledge will show up as an important theoretical term at the foundations of intentional psychology.

Tuesday, October 30, 2012

Sider on joint-carving and reference

Humans refer to things sometimes. Ted Sider thinks, with David Lewis, that part of the story for why it is that we refer to some things, rather than other possible things, is that the things we refer to are more natural. This Sider understands as a matter of the primitive structure of the world. To take one of Ted's examples, Ted's word 'pig' refers to pigs, instead of pigs-before-2011-AD-or-cows-afterwards. And he thinks that general considerations about fundamental structure can yield this intuitive result. Ted writes:

The point may be seen initially by making two strong, crude assumptions about "reasonably joint-carving". Assume first that a notion is reasonably joint-carving iff it has a reasonably simple and nondisjunctive definition in terms of the perfectly joint-carving notions, and second that the perfectly joint-carving notions are those of physics. Then surely no reasonably joint-carving relation that is to play the role of reference could relate a human population to bizarre semantic values. For the bizarre semantic values themselves have no simple basis in the physical, nor do they stand in any physically simple relations to human populations. Given any reduction that does relate us to bizarre semantic values, there is surely some other relation with a simpler basis in the physical that relates us to nonbizarre semantic values. (29)
There is considerable vagueness and imprecision in the notion of "reasonably simple" definitions Ted invokes, but I guess I agree that it's pretty plausible that one couldn't tell a "reasonably simple" story in purely physical terms about how humans are related to bizarre semantic values like pigs-before-2011-AD-or-cows-afterwards. But Ted needs more than just that fact; he needs the comparison. And I guess it just doesn't look very plausible to me that there is a "reasonably simple" definition available in purely physical terms of any of the pieces we need here. By any ordinary standards, a definition of pig --- or human! --- in purely physical terms will be rather unreasonably complex! So I worry that if this is the story about why we don't refer to pigs-before-2011-AD-or-cows-afterwards, it will generalize to show that we don't refer to pigs either. (A related problem: surely it's possible to refer to pigs-before-2011-AD-or-cows-afterwards, right?)

I think this problem persists, even given the less toy version of the theory. He continues the passage above:
The two assumptions of the previous paragraph are undoubtedly too crude, but the point is independent of them. Whether a notion is reasonably joint-carving --- enough to take part in special-science explanations --- has something to do with how it is based in the fundamental. So reference must have the right sort of basis in the fundamental if it's to be explanatory.
But it's just very difficult to say anything halfway reasonably simple about how any of this stuff arises from the fundamental. Things like pigs are just way too far removed from things like electrons. (And presumably, even electrons aren't fundamental anyway.)

More on this theme, and what I think we should say instead, tomorrow.

Wednesday, October 24, 2012

Sider on epistemic value and nature's joints

Ted Sider thinks that it's epistemically preferable to think in joint-carving terms; this is a way of better matching one's beliefs to the world. While something about that sounds right, I think that some of the things he says must go too far. He writes, for example, that
[j]oint-carving thought does not have merely instrumental value. It is rather a constitutive aim of the practice of forming beliefs, as constitutive as the more commonly recognized aim of truth. (WtBotW p. 61)
I don't think this can be right. The idea of somebody forming beliefs without any kind of sensitivity or regard for whether they are true is incoherent; this is not so for someone who doesn't care whether her beliefs carve nature at the joints. Suppose someone is charged with failing to carve at the joints with her beliefs, replies flippantly --- so what? --- and maintains her previous beliefs. She might be criticizable on epistemic grounds, but her attitude is comprehensible, even if we do not approve of it. Compare the person who is charged with having false beliefs, and replies in the same way --- indifferently accepting the charge, and continuing to believe as before. This isn't just epistemically vicious; it runs counter to what it is to be a belief. In other words, a truth aim has a better claim to a constitutive connection to belief than a joint-carving aim does.

Here is another difference that should not be overlooked: some instances of non-joint-carving beliefs are absolutely correct to hold. Maybe they're not as good as their joint-carving cousins, but one needn't choose between them. You can believe that the emerald is green and that it is grue. In fact, that's exactly what you should do. And you shouldn't feel at all epistemically deficient for having the latter belief. Compare this to false beliefs: every false belief you have prevents you from having a true one.

Moore-paradoxes show a deep connection between belief and truth; there is a deep incoherence in the idea of accepting: "I believe that p, even though not-p." But there is no corresponding incoherence in "I believe that p, even though the terms in p do not carve at nature's joints."

Whatever epistemic value attaches to joint-carving, it is less central to belief than truth is.

Wednesday, October 17, 2012

Joint-Carving and Projectability

So this weird thing has been happening to me lately where I think about metaphysics. My current symptom is an attempted negotiation of Ted Sider's recent book, Writing the Book of the World. Ted's Big Idea is that there is objective structure to reality, and that this structure is really important for all kinds of reasons; I find the general picture a pretty attractive one, but I'm rather puzzled by some of his remarks on the applicability of structure to induction and confirmation.

Ted writes:
Which observations confirm a generalization 'all Fs are Gs'? A natural answer is the "Nicod principle": observations of Fs that are Gs confirm 'All Fs are Gs'. But suppose that an observation confirms any logical equivalent of any sentence that it confirms. Then, as Hempel pointed out, the observation of red roses confirms 'All ravens are black' (given the Nicod principle it confirms 'All nonblack things are nonravens', which is logically equivalent to 'All ravens are black'.) And as Goodman pointed out, Nicod's principle implies that observations of green emeralds before 3000 AD confirm 'All emeralds are grue' (since green emeralds observed before 3000 AD are grue.) But anyone who believed that all emeralds are grue would expect emeralds observed after 3000 AD to be blue.
[This conclusion] can be avoided by restricting Nicod's principle in some way -- most crudely, to predicates that carve at the joints. Since 'is nonblack', 'is a nonraven', and 'grue' fail to carve at the joints, the restricted principle does not apply to generalizations containing them. In Goodman's terminology, only terms that carve at the joints are "projectable". (35)
I'm confused about this strategy. Ted says we can avoid the conclusion that nonblack nonravens confirm ravens' blackness because 'nonblack' and 'nonraven' don't carve at the natural joints. But 'nonblack' carves at exactly the same joint as 'black' does -- to push the metaphor only slightly further, it's the very same cut. So if 'nonblack' isn't joint-carving, and is therefore nonprojectable, then it looks like just the same would go for 'black', and mutatis mutandis for ravens and nonravens. So now it looks like I can't confirm that all ravens are black by observing black ravens. This isn't the result Ted wanted, surely.

So I'm worried that one of these things must be true:

  1. The story quoted above about why red roses don't confirm that all ravens are black is wrong;
  2. Black ravens don't confirm that all ravens are black; or 
  3. The 'black' joint is natural, but the 'nonblack' joint isn't.
When I asked Carrie about this, she suggested that Ted might be intending something like (3) here. After all, she pointed out, there might be more of an objective sense in which all black things resemble each other than that in which all non-black things do. I guess the thought would be that the metaphor is breaking down here; 'joint-carving' isn't the right term. Maybe this is right, but I didn't see that Ted could go this way, since he doesn't want to take objective similarity as the most fundamental thing. He wants structure to be most fundamental, and to explain objective similarity in terms of structure.


I feel like I must be missing something obvious here, but I can't see what it is. Somebody help?

Tuesday, October 16, 2012

Christoph Jäger on knowledge, contextualism and assertion


Christoph Jäger argues in a recent Analysis paper that contextualism and the knowledge norm of assertion are jointly untenable. Here’s my reconstruction of Jäger’s argument. (My numbering will differ from his.) Suppose that Keith has hands and the usual epistemic access to them. Also, Keith is a contextualist. For some two contexts, C-LOW and C-HIGH,
  1. In C-LOW, “Keith knows that he has hands” is true.
  2. In C-HIGH, “Keith knows that he has hands” is false.
  3. In C-HIGH, Keith may appropriately state his contextualist theory.
  4. In C-HIGH, “Keith knows his contextualist theory” is true.
  5. In C-HIGH, “Keith knows that in C-LOW, ‘Keith knows that he has hands’ is true” is true.
  6. In C-HIGH, “Keith knows that he has hands” is true.
But (6) contradicts (2), so something is wrong. Jäger says that it's either contextualism or the knowledge norm that has to go. The first two premises are standard contextualist fare. A modest closure principle is invoked in the step from (5) to (6)—but it looks fine. The move from (3) to (4) follows from Jäger's version of the knowledge norm of assertion, which I do not contest. That leaves two pressure points: the moves to lines (3) and (5). We consider them in turn.

Why should we accept (3)? Jäger introduces the set-up thus:
Consider a conversational context in which the contextualist states his theory. Such contexts are paradigmatic epistemology or 'philosophy classroom' contexts in which sceptical hypotheses are salient and taken seriously.
This appears to me to be mere assertion; why should classroom contexts be skeptical ones? I know that David Lewis said they were, but David Lewis said a lot of silly things in that paper. This just isn’t a commitment of contextualism per se. Contextualism is a linguistic thesis, made on the basis of, among other things, facts about linguistic use. If we were really taking skepticism seriously, we would not help ourselves to such facts. Some contexts in which epistemology of this sort is being performed—this one, for instance—are not very skeptical at all.

Actually, I think, there is a deeper problem for Jäger’s claim to (3). Contextualism is a controversial theory; lots of smart people think it’s wrong. It’s a theory that I accept, but I don’t think the acceptance here is a kind of outright belief, and I don’t think my statements of contextualism are appropriately regarded as assertions—as attempts to transmit knowledge. But the kind of “appropriate stating” at issue in (3) would have to be assertion in order for a knowledge norm of assertion to put any pressure on the move to (4). It’s far from obvious that anyone in ordinary contexts (let alone skeptical ones) should go around asserting that contextualism is true. (And I say this as a contextualist! I think contextualism looks like the best theory. That doesn’t make it a thing to assert in ordinary contexts.)

So there are two pretty serious problems with premise (3). Now let’s set aside those problems for the purpose of further argument, and consider line (5). The inference from (4) to (5) is supposed to follow directly from the content of the contextualist theory. Jäger writes:
[Someone denying this step] would have to argue that the contextualist can legitimately deny [(5)], i.e. deny that he knows when he asserts his theory, in CHigh, that there are low-standards contexts in which (it is true to say that) he knows that he has hands. The claim that there are such quotidian contexts, however, is a cornerstone of classical, anti-sceptical forms of contextualism.
I don’t feel the motivation here at all. Contextualism the linguistic thesis does not entail that Keith or anyone knows that he has hands; it doesn’t even entail that Keith or anyone has hands. (Obviously.) It is consistent with the truth of contextualism that you and I are brains in vats. So even if we became convinced that we could have high-standards knowledge of contextualism—say, because we have introspective access to meaning, and that access is more resistant to skeptical scenarios than perceptual knowledge—we could still abandon Jäger’s ship at the step to line (5). Sure, Keith knows-HIGH that contextualism is true; that doesn’t mean he knows-HIGH that he knows-LOW that he has hands—even granting whatever intra-context closure principle you want. We’d get that he knows-HIGH that in C-LOW, he’d be invoking a relatively weak standard if he said “I know I have hands”. It doesn’t follow that he knows whether he’d meet it.

So there are lots of ways to resist this argument.

Monday, October 15, 2012

Acting on knowledge under uncertainty

Susan, who is entertaining later this evening, is about to walk to the local market to buy some olives for cocktails. She now faces that famous perennial question whether to take her umbrella. We stipulate:

  • Susan does not know whether it will rain during her walk.
  • Susan's rational credence that it will rain during her walk is 0.4.
  • The nuisance of carrying the umbrella on her walk will cost Susan 10 utils.
  • The nuisance of being rained upon without an umbrella is -30 utils. (It is no nuisance at all to be rained on if she has her umbrella.)
It's pretty reasonable to suppose that in this case, Susan ought to take the umbrella; we calculate the expected value pretty straightforwardly. The umbrella costs 10, and gives her a 0.4 chance of saving 30. (30 * 0.4) - 10 = +2. If Susan takes the umbrella in a way sensitive to the positive expected value of doing so, there's a pretty strong intuition to the effect that she's done everything right.
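Making the comparison fully explicit (this just restates the stipulations; 'take' and 'leave' label Susan's two options):

```latex
\begin{align*}
EU(\text{take})  &= -10 \\
EU(\text{leave}) &= 0.4 \times (-30) = -12 \\
EU(\text{take}) - EU(\text{leave}) &= (0.4 \times 30) - 10 = +2
\end{align*}
```

Taking the umbrella beats leaving it by 2 utils in expectation.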

As you know, cases like this one are sometimes thought to be problematic for the knowledge norm of practical reasoning. Alex Jackson's careful paper does a nice job separating different knowledge-norm-like commitments, but he identifies this kind of case as at least a prima facie challenge to the following determination thesis:
What one knows determines what it is rational for one to do (possibly in concert with one’s desires).
I like this determination thesis. (In fact, I like something stronger -- I think we should be after something more like metaphysical grounding, not just determination.) So I need a story about the case of Susan. The Hawthorne & Stanley story is that Susan, if she is acting appropriately, is acting on knowledge about epistemic probabilities. She says to herself: there is a 0.4 chance that it will rain; this thing that she says to herself is what she treats as a reason. And it is something she knows.

I'm pretty uneasy about this line, although I've never been able to put my finger on exactly what I don't like about it. There is something odd, it seems to me, about probabilistic contents playing these kinds of roles. I know that's not an objection; I'm just recording my uneasiness. Here, anyway, is an objection: suppose she doesn't know the relevant probabilistic claim. Suppose that, for all she knows, the chance that it will rain is 0.3.

Remember, this is evidential probability we're talking about; the difference between the chance's being 0.3 and its being 0.4 can't be made by meteorological facts wholly outside Susan's ken. Still, it's not at all implausible that Susan might not know, with that level of precision, whether her evidence probabilifies rain to degree 0.3 or 0.4. Indeed, my own evidence right now, it seems to me, puts the chance of its raining on me as I walk to work tomorrow in just that ballpark; but I have no idea whether it is closer to 0.3 or to 0.4. I hope you agree this is not a very implausible situation.

Notice also that if the probability really is only 0.3, then, given the stipulations above, Susan's expected value for taking the umbrella is negative. ((30 * 0.3) - 10 = -1) So under the current stipulations, for all Susan knows, taking the umbrella might have negative expected value. She doesn't know that she should take it. She does, we may allow, know that there is some chance of rain, but this doesn't look like a good enough reason to perform this action.

You might think about trying to gild the bitter pill at this stage, suggesting that if she doesn't know whether it's better to take it, then she really does violate the action norm in taking it, although she does so in an excusable way. This seems to be Hawthorne & Stanley's line. But I don't think we should take it. For it's consistent with our case here that Susan is exceptionally well-attuned to the evidence. That is, if the chance really were only 0.3, then she wouldn't take the umbrella. This is of course totally consistent with her failure of introspective discrimination.

I think Hawthorne & Stanley were right to look to knowledge with contents other than that it will rain, but wrong to focus in on probabilistic ones, in part for the reason just offered: there's no particular reason to expect them to be known. (And also in part because of that feeling I haven't managed yet to articulate, that these things aren't the right kinds of things to be invoking in one's reasoning.) While there's no reason, it seems to me, to think that Susan must know the probabilistic content, there is, it seems to me, good reason to think she must have some other relevant knowledge around. If, for example, you think that E=K, then the evidential probability must be probability conditional on some knowledge. Let that knowledge be the reason for action. What is the relevant evidence? I don't know; it depends on how the case is filled out. Maybe something a forecaster said? Maybe the look of the clouds? Whatever it is, I say we understand Susan as acting on the basis of that evidence.

Can you get a case like this involving no such evidence? Alex Jackson tries to give us one. But this blog post is getting long and I'm getting hungry, so maybe I'll leave discussion of that for another day. 

Thursday, October 11, 2012

Where does apriority live?

Here are some things that can be violent:

  • Neighborhoods
  • People
  • Actions
Violence inheres in these different kinds of things in different kinds of ways. A violent person is liable to punch you in the face if provoked; that the neighborhood will never punch you in the face doesn't count against its violence. Still, it's not like there's not a general category, violence, that applies in some sense to violent neighborhoods, violent people, and violent actions. These things are certainly connected somehow or other.

When you have this kind of set-up, you can sensibly ask which kind of entity is the best candidate for a more fundamental bearer of the property. To put it a bit colorfully: where does the violence live? Although I can imagine some people disagreeing, it seems to me pretty plausible that the violence of a neighborhood is explained by the violence of the people who populate it, rather than vice versa. Violence doesn't live in neighborhoods. And what makes a violent person? It seems to me that it has something to do with a propensity to perform violent actions. On this way of answering the question, violence ultimately lives in actions. But maybe not, maybe there's no real way to understand a violent action independently of the violent character traits that make a person violent. Maybe violence ultimately lives in people, or in character traits. I'd be curious to hear arguments about this interesting question. It's not my area.

But my area has some similarly interesting questions, too. Consider apriority. Here are some things that can be a priori:
  • Knowledge
  • Justification of beliefs
  • Justification for beliefs
If you believe in apriority, it's worth spending a bit of time thinking about where the apriority lives.

Tuesday, October 09, 2012

Review of Philosophy Without Intuitions

I've been working for a while now on a review of Herman Cappelen's book Philosophy Without Intuitions. (Here are my several blog posts about it from this fall.) I've now completed a first draft of a review. I include it in full below the jump here. (I also have a pdf here, if you prefer to read it that way.) Comments, as always, are welcome.

Thursday, October 04, 2012

Arguments for Cappelen's 'Centrality'

Philosophy Without Intuitions is an extended argument against Centrality, the thesis that philosophers rely on intuitions as evidence. Herman's official conclusion is that "on no sensible construal of 'intuition', 'rely on', 'philosophy', 'evidence', and 'philosopher' is it true that philosophers in general rely on intuitions as evidence when they do philosophy." The broad argumentative structure of the book is this: Herman says that defenders of Centrality don't offer arguments in its favor, but that "I take it two kinds of arguments are tacitly assumed". These two arguments he then goes on, in the two parts of his book, to consider in detail and refute.

The dialectical strategy, therefore, affords a defender of Centrality two significant avenues of response, short of taking on Herman's arguments head-on:


  1. One could maintain that Centrality carries enough prima facie plausibility that it does not require argument; in the absence of compelling arguments against Centrality, it is reasonable to accept it.
  2. One could offer an argument for Centrality other than the two Herman considers.

With respect to (1), it may be helpful to consider an analogy. Contemporary archaeology widely assumes the existence of a mind-independent external world. Practically all archaeologists assume that the kind of idealism espoused by the late British Empiricists is false; they treat as perfectly coherent the idea, for example, that there might be a skull underground that no one will ever see or learn about. But although the assumption that there is a mind-independent external world is extremely widespread among archaeologists, one rarely sees arguments for this conclusion offered. And as philosophers well know, providing a cogent argument for this conclusion is not at all straightforward. But it's hard to take seriously the idea that this omission constitutes any serious error qua archaeologist -- we think that (a) our colleagues in the archaeology department are proceeding perfectly reasonably, and (b) their assumption is probably true, even if we're not sure how to provide an argument for it.

Can the defender of Centrality respond to Herman in a parallel fashion? To be sure, there are some differences here -- Centrality is a claim about how philosophy works, and the archaeologists' assumption is a claim about the broader world. But it's not clear why this difference in subject matter should make any important difference here. We have two claims: philosophers use intuitions as evidence, and there is an external world; both are widely assumed, and neither is given much argument. Indeed, there's a case to be made that the two share a deeper property as well: it is not at all clear how in principle one would go about investigating either of them empirically. So if one antecedently just considers it obvious that philosophers rely on intuitions as evidence, I am not at all sure that one will feel compelled by anything in the book to change one's mind.

With respect to (2), I think that some philosophers have been convinced that intuitions must be playing important evidential roles, not because it is obvious from watching how philosophers work, but because of epistemological concerns. The philosopher I have in mind takes her cue from the apparent epistemological difference between certain philosophical judgments -- say, the judgment that Mr. Truetemp doesn't have knowledge -- and paradigmatic empirical judgments -- say, the judgment that it was sunny in Vancouver today. There is a straightforward perceptual story to tell about my epistemic access in the latter case; it is one that affords a central role to certain of my perceptual experiences. But it doesn't look very much like my knowledge about Mr. Truetemp works in the same kind of way. There just aren't any sensory experiences that I've had that seem relevantly akin to the visual experiences that established my perceptual knowledge. It’s all very well to say that it needn’t be an intuition that’s doing the justifying here, but, unless one is offered an alternate story, one is bound to remain less than fully satisfied. Herman is quick to emphasise that there are arguments underwriting my judgment about Mr. Truetemp -- and he's right, and I think that's significant -- but arguments proceed on the basis of premises, and what story are we to tell about my epistemic access to the relevant premises? Insofar as it doesn't seem very plausible that perceptual experience can ultimately be establishing the premises from which I can conclude that Mr. Truetemp doesn't know, one might be tempted to think that it must be some other kind of experience, which plays a similar role to that of perceptual experience.

Call this line of thought the ‘What Else?’ Argument (WEA):

  1. People sometimes come to justified philosophical beliefs via armchair methods.
  2. In many of these cases, no sensory experience is playing justificatory roles.
  3. All justified beliefs must be mediated by something like sensory experience.
  4. Intuitions are the best candidates for such experiences in the cases in question. Therefore,
  5. In some cases, people come to justified philosophical beliefs with intuitions playing justificatory roles.

I do not endorse the WEA -- I reject premise (3). (You can also be a philosophy-skeptic, denying (1), or a Quinean, denying (2).) But I do think it plausible that it or something like it does motivate the thesis that intuitions are playing important evidential roles in philosophy. This is an epistemological argument, not a methodological one; it does not proceed, as the ones Herman considers do, on the basis of empirical claims about how philosophers go about constructing arguments (except for the uncontroversial-in-this-context premise (1)). The WEA-endorsing proponent of Centrality, it seems to me, escapes Herman's critique unscathed.

Tuesday, October 02, 2012

Casullo on Negative and Positive Approaches to Apriority

Chapter 1 of Al Casullo's book, A Priori Justification, explores various theories of a priori justification. In the penultimate step of the chapter, he's argued that the motivations behind various theories ultimately converge on two views -- one 'negative', and one 'positive'. This is from p. 31 (with his labels changed for simplicity):

  • (Neg) S's belief that p is justified a priori if and only if S's justification for the belief that p does not depend on experience.
  • (Pos) S's belief that p is justified a priori if and only if S's belief that p is justified by some nonexperiential source.
Thus stated, the negative characterization (Neg), Al says, is ambiguous, because of different ways in which justification can depend on experience. So (Neg) is disambiguated into:
  • (Neg-Weak) S's belief that p is justified a priori if and only if S's belief that p is nonexperientially justified.
  • (Neg-Strong) S's belief that p is justified a priori if and only if S's belief that p is nonexperientially justified and cannot be defeated by experience.
Of the three remaining views -- Pos, Neg-Weak, and Neg-Strong -- Al goes on to claim that there are really only two, because Pos and Neg-Weak are equivalent. The rationale here seems to be the idea that every justified belief has its justification due to a source, and any given source is either experiential or nonexperiential. Take a justified belief, and ask whether the source of its justification is experiential: if it is, both views deny that the belief is justified a priori; if it isn't, both affirm it.

But I think this bit of reasoning is too quick. I'm suspicious of the move from nonexperiential justification to derivation from a nonexperiential source. To equate Pos with Neg-Weak is to legislate in advance that for any justified belief, there is a source of its justification. That is to say, it assumes prior to argument that there is no original justification -- justification that does not depend on a source. But that there is such original justification is, it seems to me, a coherent view that occupies a spot in logical space. (For what it's worth, I also think it's true; Ben and I defend it in The Rules of Thought.)

Sources generate things that weren't already there. The assumption that justification for a priori justified belief must derive from a source is, I think, part of the motivation for supposing there must be some kind of faculty of intuition to serve as source.

I'm not sure whether there are nonexperiential sources of justification. But I'm firmly committed to beliefs that are justified in a way that doesn't depend on experience. If these two attitudes are jointly coherent, then Casullo is wrong to equate Pos with Neg-Weak.

Friday, September 28, 2012

New version of JPK; Thought Blog

Just a quick note to anyone who might be interested that I have posted a revised version of my paper, "Justification is Potential Knowledge," to my works in progress page. I expect to be submitting it in the next month or so, so if you have any comments or ideas, I'd very much welcome them.

Also, Thought now has a new blog devoted to discussion of articles that appear in that journal. So far, Brian Weatherson has written a thoughtful post about my paper, "Knowledge Norms and Acting Well." Check it out here.

Friday, September 21, 2012

Very Crazy Beliefs and JPK

Following up on yesterday's post, here is another kind of case that Jessica pressed me about last summer. Again, I'm defending:
JPK: S’s belief is justified iff S has a possible counterpart, alike to S in all relevant intrinsic respects, whose corresponding belief is knowledge.
Sometimes the counterpart belief has a different content. As in the kind of case I discussed yesterday, Jessica is worried that this flexibility about content makes the view too liberal. We can put the worry this way: maybe my view has the implication that very crazy beliefs might end up justified, because they might be realized by intrinsic states that are consistent with knowledge in very different social environments. By a ‘very crazy’ belief, I mean a belief so implausible that it would usually be better to interpret a subject who seemed to express it as meaning something deviant. To take Burge’s famous example:
If a generally competent and reasonable speaker thinks that ‘orangutan’ applies to a fruit drink, we would be reluctant, and it would unquestionably be misleading, to take his words as revealing that he thinks he has been drinking orangutans for breakfast for the last few weeks. Such total misunderstanding often seems to block literalistic mental content attribution, at least in cases where we are not directly characterizing his mistake.

The reason it is usually preferable to attribute linguistic confusion instead of very crazy beliefs is that to attribute the very crazy belief would be to attribute a radically unjustified one. So it might be a problem if my view ended up saying that such beliefs are justified.

Suppose that Emily is at the breakfast table, drinking orange juice, and expressing what appears to be the very crazy belief that she is drinking orangutans. For example, she says, in a tone of voice not at all suggestive of joking, “I try to drink orangutans every morning, because they are high in vitamin C.” Does JPK have the implication that her very crazy belief is justified? Here is a line of reasoning that suggests that it does -- thanks again to Jessica for articulating it to me last summer. There is nothing intrinsic to Emily that guarantees that her linguistic community does not use ‘orangutan’ to refer to orange juice. Consider her intrinsic duplicate, Emily*, in a world where everybody else treats the word ‘orangutan’ as referring to orange juice. Emily* speaks and believes truly, expressing the belief that she is drinking orange juice. If Emily*’s belief constitutes knowledge, then JPK has it that Emily’s is justified.

Notice that for a JPK theorist to avoid this implication, it is not enough to point to a possible version of the case in which Emily*’s belief falls short of knowledge—this would be easy. According to JPK, Emily’s belief is justified if any one of her possible intrinsic duplicates has knowledge. So to avoid the conclusion that very crazy beliefs like Emily’s are justified, the JPK theorist must argue that all of Emily’s possible intrinsic duplicates fail to know.

Let us consider further what Emily and Emily* must be like; further details of the case will be relevant. Part of what makes Emily’s belief so very crazy is that she is so out of touch with her linguistic community. On the most natural way of filling out the case, Emily sometimes encounters uses of the word ‘orangutan’ that strike her as very strange. Since she thinks that orangutans are a kind of fruit juice, she doesn’t really know what to make of the plaque that says ‘orangutans’ at the zoo, next to some Asian primates. That sign seems to Emily to suggest that these apes are orange juice! Perhaps she thinks it is an error, or a prank. Or maybe it’s a sign not for the exhibit, but for some unseen orange juice nearby. On this most natural way of understanding the case, Emily has lots of evidence that her way of understanding ‘orangutan’ is wrong. This evidence impacts her internal state; all of her intrinsic duplicates also have evidence suggesting that ‘orangutan’ doesn’t mean what she thinks it does. There is, I suggest, every reason to consider this evidence a defeater for those duplicates’ knowledge. Even though Emily* expresses a truth when she says to herself, “orangutans are a kind of fruit juice,” she has lots of evidence that this is false. That evidence prevents Emily* from knowing; so JPK need not say that Emily’s very crazy belief is justified.

However, the analysis of the previous paragraph relied on a particular interpretation of the case. Although it is, I maintain, the most natural one, it is not the only possible one. What if we suppose that Emily has no internal evidence against the correctness of her bizarre use of ‘orangutan’? In this case, Emily* will have no defeater; might she therefore have knowledge? It depends, I think, on how each came to have their way of using the term. Suppose that, although ‘orangutan’ functions as it actually does in Emily’s linguistic community, she has never been taught this term. She spontaneously decided, for no particular reason, to use the term to refer to orange juice, and it’s just a coincidence that it happens to be the same word as that used in the wider community for orangutans. We can suppose that she’s encountered the word from time to time, but in impoverished contexts which provide no reason to suspect that her usage is incorrect. For example, she sometimes overhears people saying “I like orangutans”, without the additional context that would cue her into supposing this to be anything other than an expression of esteem for orange juice. (We include this limited contact to make it plausible that she really is using the public term.) In this case, Emily has formed beliefs about the meaning of a public term rather irresponsibly; this fact will be reflected in her intrinsic state. So Emily*, too, will have come irresponsibly to believe that “orangutan” means orange juice; even though her belief is true, it is not knowledge. So on this version of the case, too, the JPK theorist can avoid attributing justified belief to Emily.

What if, instead, we suppose that Emily thinks that orangutans are orange juice because of misleading evidence to the effect that ‘orangutan’ means orange juice? Her older brother, perhaps, decided it’d be funny to teach her that as a gullible child, and she’s never encountered evidence to the contrary. Now Emily’s belief looks like it might well have been formed responsibly. So there is no obvious obstacle to suggesting that Emily* has knowledge. And in this case, it looks like JPK will suggest that Emily’s belief, very crazy though its content is, is justified after all. This strikes me as the correct result; it is a familiar instance of a false belief that is justified on the proper basis of misleading evidence.

So it seems to me that JPK does not have problematically liberal implications about the justification of very crazy beliefs. On the most plausible versions of the case, very crazy beliefs come along with intrinsic features that are inconsistent with knowledge; on those versions where they do not, it is intuitively plausible to attribute justified belief.

Thursday, September 20, 2012

Slow Switch, JPK, and Validity

I've been working for a while on a paper defending this knowledge-first approach to doxastic justification:
JPK: S’s belief is justified iff S has a possible counterpart, alike to S in all relevant intrinsic respects, whose corresponding belief is knowledge.
One of the moves I often use in that paper involves an exploitation of content externalism. My belief is justified if my intrinsically identical counterpart's is knowledge -- but his knowledge needn't have the same content as my justified belief.

When I presented this paper in St Andrews last summer, Jessica Brown expressed the worry that the kind of externalism I was relying on made the view too liberal. She was worried that there'd be cases of intuitively unjustified beliefs, even though there were intrinsically identical counterparts who had knowledge. One possible family of such cases involves 'slow switch' subjects -- people whose external environments change in a way that implies a change in content, without their realizing it. Suppose someone commits the fallacy of equivocation because she has been slow-switched; won't JPK have the implication that her resultant belief is justified anyway? I think that JPK probably will have this implication. But I don't think it's intuitively the wrong one. Consider a case. This example is borrowed from Jessica's book.

Sally starts out a regular earth-person, who knows some stuff about aluminum, but not much detail about its chemical composition. She does know, however, that some of it is mined in Australia, so she says to herself:

(1) Aluminum is mined in Australia.

One night, unbeknownst to Sally, she is moved to Twin Earth, where there is no aluminum, but instead, some superficially similar stuff twaluminum. After a while there, Sally has thoughts about twaluminum. In particular, she comes to know that some of it is mined in the Soviet Union. This is the belief she expresses when she says to herself:

(2) Aluminum is mined in the Soviet Union.

She hasn't forgotten her previous knowledge, so she still knows (1). And she's unaware that "aluminum" in (1) refers to different stuff than does "aluminum" in (2). So she might well put these together and infer her way to a claim she'd express by:

(3) Something is mined in Australia and the Soviet Union.

Sally equivocates on "aluminum", because of her external environment. Intrinsically, she's just like her counterpart who stays on regular earth and comes to know that aluminum is mined in both places, and (validly) concludes that something is mined in both places. So JPK has it that Sally's conclusion, (3), is justified, even though it is the product of equivocation.

I think, however, that this is exactly the right result. It is intuitively plausible to suppose that Sally's appearances are misleading, but her belief is justified. This means that invalid reasoning doesn't always interfere with the justification of beliefs. But that's what we should think about this case, independently of JPK.

I'd be interested to hear any thoughts from any readers, but especially answers to these:

(a) Do you agree that it is intuitive to describe Sally's conclusion (3) as justified?
(b) Do you see any other, more problematic implications for JPK that derive from slow-switch considerations?

Thursday, September 13, 2012

Cappelen on Explaining Away Intuitions

The 6-page Chapter 5 of Herman Cappelen's Philosophy Without Intuitions is an argument against "Explain":
Explain: Suppose A has shown (or at least provided good arguments in favor of) not-p. If many of A's interlocutors (and maybe A herself) are inclined to sincerely utter (and so commit to) 'Intuitively, p', then A is under an intellectual obligation to explain why this is the case (i.e. why there was or is this inclination to utter and commit to 'Intuitively, p'). She should not full-out endorse not-p before she has discharged this obligation.
(The principle is cast metasemantically because Herman thinks that 'intuitively' is context-sensitive, and that this is the only way to capture the attitude in its generality.) Against Explain, Herman considers various things that people might mean by sentences like "intuitively, p", and suggests that, for each of them, Explain looks pretty unmotivated. For example, when considering the idea that it means something like "at the outset of inquiry we believed or were inclined to believe that p", he writes:
When 'Intuitively, p' is so interpreted, it is hard to see any reason to accept Explain. Suppose a philosopher A has presented a good argument for not-p. The fact that some judge or are inclined to judge that p before thinking carefully about the topic isn't something that in general needs to be explained by A. The question under discussion is whether p is the case. The argument for not-p addressed and settled that question. (90)
As with so much of Herman's book, I'm in agreement with the main thrust here. In a paper I wrote on this topic a little while back ("Explaining Away Intuitions"), I said, along very similar lines to Herman's, that:

Widespread practice notwithstanding, it is not prima facie obvious why philosophers should, in general, be concerned with explaining intuitions, or with explaining them away. Intuitions are psychological entities; philosophical theories are not, in general, psychological theories. Ontologists theorize about what there is; it is quite another matter, one might think, what people think there is. Epistemologists concern themselves with knowledge, not with folk intuitions about knowledge.
And:
If I’m to theorize about, say, the nature of reference, I should not feel at all guilty if I fail to explain why people like chocolate, or why the Detroit Lions are so bad. Why should I feel differently about the fact that some people think that in Kripke’s story, the name ‘Gödel’ refers to Schmidt? This psychological fact is interesting, and is, it seems to me, well worth explaining. But it is not clear why it should be the reference theorist’s job to explain it. His job is to explain reference, not to explain intuitions about reference.

Obviously, with respect to these passages, Herman and I are very much on the same page. However, I went on in that paper to make a major caveat, which I think Herman may be overlooking. Sometimes, considerations having to do with intuitions are relevant to the nonpsychological question, too. This may be so even if the evidence doesn't derive from intuition in any interesting sense. (In other words, I don't think that the plausible version of Explain relies on the truth of anything like Centrality.) I agree with Herman that if you have an argument that you recognize to be conclusive for a given philosophical thesis, then you don't have to worry too much about other people's intuitions. But sometimes you need to worry about those intuitions in order to be able to recognize that an argument is conclusive. Recognizing that an argument is a good one is a cognitive achievement, and intuitions can be relevant to whether one attains it. They might, for example, defeat one's justification for a given premise.

This observation isn't inconsistent with the letter of Herman's remarks in Chapter 5. In a footnote, he clarifies that his scope is limited: "[t]o make things simple, I assume that we have settled that not-p (or at least made it sufficiently plausible for us to endorse it) and that all that remains as a potential obstacle is the commitment to 'Intuitively, p'." But this isn't always—or, in the interesting cases, even very often—the case. As I wrote in my paper mentioned above:

Sometimes, for instance, a philosopher may be deliberating about a particular view, without being at all sure what to think. I find in myself conflicting intuitions, and don’t know which to endorse. If I can see that one of those intuitions is a member of a class that I’m likely to find appealing even if false, this might provide me with some reason to prefer the other. The Horowitz case provides a nice example: if I am in internal tension between (a) the thought that it’s better to do that which results in more lives being saved, and (b) the thought that it’s wrong to kill somebody in a way over and above the way it’s wrong to let somebody die, I may, if I’m convinced by her explaining‐away, discount (b) as the product of a general error in rationality.
So I think that by focusing on the cases where one has already identified a conclusive argument, Herman is probably not looking at the best candidates for situations in which consideration of intuitions might be interesting.

Wednesday, September 12, 2012

Cappelen on Intuition and Philosophical Exceptionalism

I'm reading through Herman Cappelen's Philosophy Without Intuitions again, trying to settle on a few discussion points to pull out for a review. The book is an extended argument against 'Centrality', the thesis that
[c]ontemporary analytic philosophers rely on intuitions as evidence (or as a source of evidence) for philosophical theories. (3)
Centrality, Herman says, is a widely-held misconception of philosophy, which has confused quite a lot of metaphilosophical theorizing, but hasn't had much effect on first-order philosophical argumentation. I'm broadly sympathetic to this conclusion, but there are a few respects in which I suspect Herman's argument might be too quick. I'll probably blog about several of them; here is one.

When clarifying the target of his critique, Herman specifies that the interesting version of Centrality should be interpreted as applying distinctively to philosophy, as opposed to other disciplines:
Since Centrality is a claim about what is characteristic of philosophers, it should not be construed as an instantiation of a universal claim about all intellectual activity or even a very wide domain of intellectual activity. Suppose that all human cognition (or a very wide domain of intellectual life) appeals to intuitions as evidence, from which we can derive as a special instance that philosophers appeal to intuitions as evidence. Such a view would not vindicate Centrality, since according to Centrality the appeal to intuitions as evidence is meant to differentiate philosophy—and, perhaps, a few other kindred disciplines—from inquiries into the migration patterns of salmon or inflation in Argentina, say. If it turns out that the alleged reliance on intuitions is universal or extends far beyond philosophy and other allegedly a priori disciplines, that would undermine Centrality as it is construed in this work. ... As a result, it will be crucial when evaluating an argument for the significance of intuitions to keep track of its scope. An argument that shows that all intellectual activity relies on intuitions as evidence, and then derives Centrality as a corollary, will not be acceptable given how Centrality is presented by its proponents. (16)
I think that Herman is overlooking the following possibility, which is worthy of consideration: evidential reliance on intuition is ubiquitous, and not distinctive of philosophy. However, philosophy is unusual in that (a) a higher proportion of the interesting action involves the contribution of intuition than tends to be so in other fields, and (b) in some canonical instances of philosophy, intuition provides all the relevant evidence. According to the picture I'm thinking of, intuition is one important source of evidence everywhere, and it plays a particularly interesting role in philosophy. If this were true, I think it would vindicate an interesting version of Centrality, one that makes a reckoning with the epistemic significance of intuition a pressing issue for philosophers, even though it would not claim that intuition fails to be a source of evidence in other realms. I'm inclined to interpret at least many of those philosophers who do emphasize the role of intuitions in philosophy as thinking in something like this way.

I do think the view sketched here is wrong; establishing this is one of the central aims of my forthcoming book with Ben Jarvis. But I don't think Herman is right to leave it off the conceptual map.

Friday, May 18, 2012

Two new drafts

I have drafts available online now of two projects I've been working on recently. One is my paper, "Justification is Potential Knowledge," which defends a knowledge-first account of doxastic justification. The other is a draft of an "Analysis of Knowledge" SEP entry. (The current entry, by Matthias Steup, is in need of revisions, and Matthias has taken me on as a co-author for that purpose.) Comments on either are very welcome!

I'm off to Scotland tomorrow, for an extended visit to the NIP.

Monday, May 07, 2012

Correction in "Knowledge Norms and Acting Well"

I was very pleased to have my short discussion on evaluating the knowledge norm of practical reasoning appear in the inaugural issue of Thought. Unfortunately, I've just noticed that there are two errors near the end of the published version of the paper. One, which is entirely my fault, is that I misspelled Mikkel Gerken's name. I'm very sorry, Mikkel!

The second error, which seems to have been introduced in copyediting, is more likely to interfere with comprehension. So I thought I should at least set the record straight here. The penultimate paragraph of my paper was meant to run thus:
The point cuts in both directions: pairs of intuitions like the ones featured above cannot be used to refute the knowledge norm of practical reasoning; neither can cases that include both knowledge and appropriate action, or both ignorance and apt criticism of action, be used to speak directly in favor of the knowledge norm. The same point applies to attempts to evaluate knowledge norms from the other side: just as one can’t get very far from arguments of the form ‘S knows that p, but oughtn’t to Φ’, neither can one get very far from arguments of the form ‘S doesn’t know that p, but it would be correct to Φ.’ Relatedly, pairs of cases that differ with respect to knowledge, but are alike with respect to appropriate action—as is plausible, for instance, with knowers and their Gettierized counterparts—do not bear at all directly on knowledge norms (contra Gerken (2011), pp. 535-36; Smithies (2011), p. 5). The knowledge norm identifies knowledge with reasons, but the facts about what reasons one has do not supervene on the facts about what actions are appropriate. (Perhaps there is supervenience in the other direction.)

The penultimate sentence of this paragraph unfortunately became rather mangled. (I regret that I whiffed my chance of catching it in proof corrections.) This is what was printed:
...Relatedly, pairs of cases that differ with respect to knowledge, but are alike with respect to appropriate action—as is plausible, for instance, with knowers and their Gettierized counterparts—do not bear at all directly on knowledge norms (contra Gerkin 2011, pp. 535–536; Smithies 2011, p. 5). The knowledge norm identifies knowledge with reasons, but the facts about what reasons one has to do does not supervene on the facts about what actions are appropriate. (Perhaps, there is supervenience in the other direction.)

The point I was trying to make was that everyone should agree that sometimes, pairs of subjects who have distinct reasons available to them ought nevertheless to perform the same actions—different reasons may point in the same direction. And given the knowledge norm, this is pretty plausible in the case of Gettier subjects and their knowledgeable counterparts. Henry in fake barn country and twin-Henry in real barn country do not share all the same reasons: twin-Henry has as a reason the proposition that there is a barn in front of him, and Henry does not. Nevertheless, if they're both allergic to barns, each is reasonable in stepping away from the structure before him. Twin-Henry's action is made reasonable by the reason that there is a barn before him (combined with his allergy and interests); Henry's action is made reasonable by the reason that there is a building that looks just like a barn before him (combined with his allergy and interests).
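Put schematically, the supervenience claim this pair is meant to refute might be rendered like this (the notation is mine, not anything in the paper):

\[
\forall s_1 \forall s_2 \, \big[ \mathrm{Apt}(s_1) = \mathrm{Apt}(s_2) \rightarrow \mathrm{Reasons}(s_1) = \mathrm{Reasons}(s_2) \big]
\]

where $\mathrm{Apt}(s)$ is the set of actions appropriate for $s$ and $\mathrm{Reasons}(s)$ is the set of reasons $s$ has. Henry and twin-Henry agree on $\mathrm{Apt}$ (each should step away from the barn-shaped structure) but differ on $\mathrm{Reasons}$, so the conditional fails. The converse claim, with the two identities swapped, is untouched by the pair.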

Saturday, March 03, 2012

Dretske, Information, and Knowledge

There's a philosophy of mind reading group at UBC, reading Dretske's (1981) Knowledge and the Flow of Information this spring. I've never made a proper study of Dretske's work before, so I'm finding it extremely useful and interesting. In yesterday's reading group, I had an idea that I'd like to explore a bit further; consider this blog post a rather preliminary rumination.

First, some background -- both to clue in any readers who are interested in following along but don't know the Dretske, and so that I can make sure I have his framework clear in my own head.

Wednesday, February 29, 2012

Goldberg on Gettier Cases and Internalism

Sanford Goldberg has an interesting new argument in Analysis against mentalist internalism about justification. I'm working on committing myself to an internalist approach to justification at the moment; Goldberg's new paper isn't enough to force me to reconsider.

The master argument of the paper, which Goldberg lays out quite succinctly, runs as follows (I quote):
P1. The property of being doxastically justified just is that property which turns true unGettiered belief into knowledge.

P2. No property that is internal in the Justification Internalist’s sense is the property which turns true unGettiered belief into knowledge.

Therefore

C. No property that is internal in the Justification Internalist’s sense is the property of being doxastically justified.

I think internalists have two fairly natural lines of defence. First, one might reject the very notion of a property that turns true unGettiered belief into knowledge, at least if we read 'turns into' in some kind of truth-making way. No doubt there is in some weak sense a property P such that one has knowledge if and only if one has true belief, has P, and is not in a Gettier situation, but I see no reason to suppose that it will be a property any more interesting or natural than the disjunction, knows or false or Gettiered. (I rather suspect "Gettiered" itself can be understood at best conjunctively.) And I don't think there's any interesting sense in which this disjunction turns unGettiered true belief into knowledge.
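To check that the weak disjunctive property really does satisfy the biconditional, write $B$, $T$, $K$, $G$ for believes, true, knows, and Gettiered (the notation is mine), and grant the standing assumptions that knowledge entails true belief and that Gettiered beliefs aren't knowledge:

\[
T \land B \land (K \lor \lnot T \lor G) \land \lnot G \;\equiv\; T \land B \land K \land \lnot G \;\equiv\; K
\]

The second and third disjuncts are screened off by the truth and no-Gettier conjuncts, so the property does all its work through $K$ itself; that is exactly why it looks too cheap to count as the interesting property P1 demands.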

In defence of this way of setting the issue up, Goldberg writes:
After all, ‘doxastic justification’ is a term of art, and so if we are to continue to use it, it must pick out something that is epistemically interesting. It picks out something epistemically interesting if P1 is true; but it is unclear whether it picks out something interesting if P1 is false. At a minimum, the burden of proof will be on those internalists who deny P1: if this is how they respond to the present argument, then we are owed an explanation of why we should care about the property of which the internalist is purporting to give us an account.

But there are other fairly natural reasons to care about justification available. For example, justification may be that property which permits knowledge, without being one that guarantees it.

The second way an internalist might resist Goldberg's argument is to reject the considerations he brings to bear in favor of his P2. He imagines someone in an evil demon situation who is an intrinsic duplicate of someone with a justified belief. Take her perceptual belief that p. Her belief must be justified, by the internalist's lights, but is not knowledge, since she is in an evil demon scenario. It is not knowledge, even if it happens to be true. This doesn't support the argument unless we can also establish that this is not a Gettier case; at the moment it rather looks like one. (She has misleading evidence for p, and reasonably forms the belief that p on that basis; it turns out that p happens to be true.)

To close off this avenue, Goldberg asks us to suppose that it is probable that our subject's beliefs are true, due to the machinations of the demon.
Still, it is easy to tell yet another variant of the Evil Demon case on which this move – to explain away the ‘no knowledge’ verdict by appeal to Gettierizing luck – is not plausible in the least. Imagine the following scenario, involving the Not-so-Evil Demon: it is just like the ordinary Evil Demon scenario except the Not-so-Evil Demon has conspired to make 65% of your Doppelgänger’s beliefs true (the other 35% being false owing to systematic illusions sustained by Not-so-Evil). Imagine your Doppelgänger in this world. For any perceptual belief (s)he has, there is a 65% chance that the belief is true. If it’s true, this is not merely lucky.

But stipulating facts about luck is a dangerous game. There is of course some sense in which the not-so-evil demon victim isn't merely lucky to believe truly, but is it the one relevant to Gettier cases? Probably not. Nothing in Gettier's original cases precludes probability of true belief of this sort. Go back to Jones and the Ford and Brown in Barcelona; suppose Brown is in Barcelona 65% of the time, and Smith believes that Jones has a Ford or Brown is in Barcelona, as in the original case, solely on the basis of the misleading evidence about the Ford. This is still a paradigmatic Gettier situation, even though there may be some sense in which the belief is true not merely by luck. Given this parallel, I think the internalist has every reason to regard the subject of the not-so-evil demon as in a Gettier case. So there are good grounds for resisting Goldberg's argument.

Monday, February 13, 2012

Metaphysical and Conceptual Knowledge Connections

Knowledge shows up in theories a lot lately. Or should I say that 'knowledge' shows up in statements of theories? One question I'm hoping to research a fair amount in the near future concerns the status of theoretical claims about knowledge. The knowledge first program, broadly construed, says that knowledge has some kind of priority or privileged status, which makes it a good candidate to explain other states. (My broad construction applies not just to the Williamson project, but to all of those recent projects that posit strong theoretical roles for knowledge, such as the knowledge-action links of Hawthorne and Stanley.) Here's a question I'm interested in: how should we understand the knowledge first attitude? Here are two candidate interpretations:

  1. Knowledge, the mental state, is metaphysically (relatively) fundamental; it is among the (more) basic building blocks of the world. Questions about knowledge are questions about the (relatively) natural epistemic joints.

  2. KNOWLEDGE, the concept, is conceptually (relatively) fundamental; it is among the (more) basic ideas in our understanding of the world. Questions about knowledge are questions about our (relatively) fundamental conceptual framework.


(The hedges there indicate that the 'first' in knowledge first surely shouldn't be taken to imply absolute priority; one can subscribe, for instance, to the metaphysical interpretation of the knowledge first project and still believe that physical particles are the most fundamental bits of the universe; knowledge is prior to most of psychology and epistemology, perhaps, but not prior to physics.)

My suspicion, which I'm not yet in a position to make good on, is that a lot of authors are fairly indiscriminate about this distinction, and furthermore that it matters. But I'm not at all ready to argue for that claim; I need to re-read a lot of this literature with the question in mind. In this blog post, however, I'll highlight a number of passages that suggest each of the readings. Inclusion on this list is not meant as an indication either that the author endorses one interpretation over the other, or that the author is in any way confused on the matter; this is just a list of passages that strike me as suggestive of one of the two views, so that eventually I can look back and have a whole list of material to scrutinize.

I'll continue to update this blog post as I find passages that appear relevant. Suggestions, of course, are extremely welcome!

Knowledge, stakes, and closure

I've been sitting in on, and enjoying, Carrie Jenkins's grad seminar in epistemology. Today, one of our grad students, Kousaku Yui, brought up a pretty interesting suggestion in response to Jason Stanley's stakes-relative approach to knowledge. I didn't recognize the point as one that I've seen discussed before -- if there is a literature on it, I'd be very interested to see it.

The worry is this. Jason thinks that when the stakes are high, it's harder to know. But stakes aren't just a feature of an individual at a time; stakes are high for certain propositions when the truth or falsehood of those propositions makes a big difference. It's possible for a single subject to be such that the stakes for p are high, but the stakes for q are low. For example, it may be very important to Hannah and her wife Sarah whether the bank is open tomorrow, but not at all important to them whether it will rain tomorrow. In such a case, they would need to meet more exacting 'standards' in order to know about the bank than they would to know about the rain. That's a little bit counterintuitive, but only in the way that pragmatic encroachment is generally a little bit counterintuitive.

But here's what might be a deeper problem. Suppose someone is in a situation like the one just mentioned -- the stakes for p are high, but the stakes for q are low -- but where the subject knows that if q, then p. If so, then it's easy to know q, but hard to know p; yet it looks like anyone who knows q could easily infer p. Closure plus the possibility of a case with this structure seems to entail that the stakes-sensitive view can't be right.

Do we have to say such cases are possible? I don't see anything that forces us to, but certain cases are very naturally described in that way. Suppose Hannah and Sarah have an important bill, as per the standard high-stakes bank case; it's very important to them whether the bank will be open on Saturday. Suppose also that they have a friend Franklin who is a bank teller, and they have some small interest in whether he will be at the bank on Saturday. Here, however, the stakes are low -- nothing much hangs on whether they're correct about Franklin's location on Saturday. Assume that they have a good enough position to meet arbitrarily strong knowledge standards for the proposition that Franklin will be at the bank only if it is open. So we have:

  • p: The bank is open Saturday

  • q: Franklin is at the bank Saturday

  • The stakes for p are high

  • The stakes for q are low

  • Everyone knows that if q, then p.


If Hannah and Sarah have a middling epistemic position with respect to q, then it looks like they're in a position to know q, but not to know p. But this violates closure.
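Schematically, running 'knows' and 'is in a position to know' together as $K$ for simplicity (a rough rendering of mine, not Jason's own formulation):

\[
\begin{aligned}
&1.\ Kq && \text{(low stakes: a middling position suffices for } q\text{)}\\
&2.\ K(q \rightarrow p) && \text{(stipulated arbitrarily strong position)}\\
&3.\ \lnot Kp && \text{(high stakes: a middling position does not suffice for } p\text{)}\\
&4.\ \big(Kq \land K(q \rightarrow p)\big) \rightarrow Kp && \text{(closure)}
\end{aligned}
\]

Premises 1, 2, and 4 entail $Kp$, contradicting 3. So the stakes-sensitive theorist must either reject closure or deny that cases with this structure are possible.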

Might Jason say that in such a case, the high stakes for p force the stakes up for q as well? He might, but it seems like a pretty strange thing to say. Intuitively, it doesn't matter to them much at all whether Franklin is at work on Saturday. Their bill situation has nothing to do with Franklin. Maybe we can wrap our heads around the idea that the bill makes it harder to know that the bank is open -- but can it really make it harder to know where their friends are?

Wednesday, February 08, 2012

E = K as foundationalism?

I'm re-reading Timothy Williamson's Knowledge and Its Limits for a reading group at UBC. I'm struck by this passage, from the introduction to Chapter 9 on Evidence.
[W]e may speculate that standard accounts of justification have failed to deal convincingly with the traditional problem of the regress of justifications—what justifies the justifiers?—because they have forbidden themselves to use the concept knowledge. E = K suggests a very modest kind of foundationalism, on which all one's knowledge serves as the foundation for all one's justified beliefs.

I'm not at all sure what to make of this. I'm very impressed by E = K, but I have a hard time seeing reason to accept either of these claims:

  1. E = K is a kind of foundationalism

  2. E = K provides a solution to the traditional problem of the regress


Here's the story about foundationalism and the regress that I tell to my undergrads. I think it's pretty standard; if it's somehow idiosyncratic, I hope someone will tell me. Everybody thinks that the justification for some beliefs depends on other justified beliefs. How do those other beliefs get justified? Maybe by yet further justified beliefs. Foundationalism is the thesis that there are basically justified beliefs -- beliefs that are justified in some other way than by being supported by other justified beliefs. If you're not a foundationalist, then you think that all justified beliefs are justified by other justified beliefs; for any given justified belief, there must be a chain of justified beliefs in successive support relationships that never ends, either because it continues infinitely, or because it doubles back on itself. Insofar as these latter two options are implausible forms of regress, there is intuitive support for foundationalism.

So as I understand it, what it is to be a foundationalist is to think that there are basic beliefs — i.e., beliefs that are justified, not in virtue of being supported by other justified beliefs. I'm surprised to see Williamson suggest that his view is a foundationalist one; E = K appears to me to be neutral on the question of whether there are basic beliefs. The Knowledge First project is consistent with the traditional idea that knowledge entails justified belief; I don't think it's a stretch to say that, on Williamson's view, knowledge is a (special, metaphysically privileged) kind of justified belief.

So if one's knowledge is among one's justified beliefs, then read literally, the claim that "[all of] one's knowledge serves as the foundation for all one's justified beliefs" is tantamount to the claim that the chains of justification of the sort foundationalists talk about are in fact circular: some of my justified beliefs—the knowledgeable ones, at least—are supported by chains that include themselves. But this is anathema to foundationalism, as the label for that view makes vivid.

Maybe I'm reading uncharitably literally; perhaps the thesis is that the knowledge is basic, and it supports the mere justified beliefs. All the knowledge is at the bottom of the pyramid and nowhere else. This now looks like foundationalism, but it carries the commitment that all knowledge is basic: all knowledgeable beliefs are justified, not in virtue of being supported by other justified beliefs. This is a stronger claim than any I'd thought Williamson was committed to; I'm not sure it's particularly plausible. There is such a thing as inferential knowledge; in such cases, it seems very intuitive that its justification depends on the justification of the beliefs from which it's inferred. If you're in the knowledge first camp, you shouldn't think that's the main thing or the fundamental thing or the most interesting thing going on -- knowledge first people should be more excited about the fact that the knowledge of the conclusion flows from the knowledge of the premise -- but I see no reason to deny that there's also justificatory dependence at a less fundamental level. But foundationalism is (I thought) precisely about justificatory independence.

So what's going on? Does Williamson intend a weaker sense of 'foundationalism'? Or am I wrong about what the traditional sense would require, given his comments? Or is Williamson really committed to the thesis that if S knows that p, then S's justification for p does not depend on S's justification for any other proposition?

Wednesday, February 01, 2012

Rationality and Fregean Content

I haven't been updating my blog since moving to UBC last fall, partly because I've been busy preparing new courses and grant applications and settling into a new city. (My two biggest professional bits of news over the last while, for anyone interested who hasn't already heard elsewhere, are that The Rules of Thought, my book with Ben Jarvis, is now under contract with OUP, and I'll be beginning an Assistant Professorship at UBC this summer.)

I'm now starting to shift back into research mode, however, and blog activity may come back up accordingly.

One of the philosophy books that has been on my 'to-read' list for a long time is Jessica Brown's Anti-Individualism and Knowledge; I've been interested in the relationship between mental content and epistemology for a while now. Of course if I'd been cleverer about it, I'd've read the book while I worked at St Andrews and spoke to Jessica regularly, but: better late than never.

Among the interesting things Jessica is up to in her book is an argument that Fregeanism about content is inconsistent with -- or at least, fits poorly with -- anti-individualism. This is the negation of a thesis we defend in one of the chapters of The Rules of Thought, so I wanted to attend especially closely to the argument. (Thanks to Sandy Goldberg for bringing this connection to my attention recently.)

One of Jessica's arguments boils down to this. (I'm looking at pp. 200-201.)

  1. Fregean sense depends for its motivation on the transparency of sameness of mental content.

  2. Anti-individualism is inconsistent with the transparency of sameness of mental content.

  3. Therefore, if anti-individualism is true, then Fregean sense is unmotivated.


In defense of (1), Jessica suggests that, were it possible for a subject to be wrong about whether two token concepts express the same content, the failure to make logically valid inferences would be consistent with full rationality. Her example features Celeste, who is in a Frege case:
Celeste fails to make the simple valid inference ... since she does not realize that the relevant thought constituents have the same content and thus that the inference is valid. Further, she can come to the correct view only by using empirical information. On this view, her failure to make the simple valid inference does not impugn her rationality, for even a rational subject would fail to make a valid inference that she does not realize is valid.

Jessica suggests that Fregeanism is motivated by the possibility of rationally holding sets of beliefs that non-Fregean views would count as contradictory, or of rationally declining to make inferences that such views would count as logically valid. I agree -- a central motivation for Fregeanism is to explain why there's nothing irrational about believing Hesperus to be F and believing Phosphorus not to be F. But why does this rely on the assumption of the transparency of sameness of content? Jessica says in the passage above that there is an alternate explanation available, if transparency is denied: one doesn't make what is in fact a logically valid inference because one doesn't realize that it is valid, and this is consistent with full rationality.

Jessica's argument seems to rely on this claim:

(Reflection) If a subject doesn't realize that an inference is valid, then she faces no rational pressure to make it.

But Reflection strikes me as a pretty dubious principle in full generality. Suppose somebody is pretty dense, and fails to realize that modus tollens is a valid inference form, and so fails to realize that various instances of it are valid. She sits there and thinks "if it has an even number, then it's red" and "it's not red", and finds herself with no inclination to infer "it doesn't have an even number". Surely her ignorance doesn't excuse her rational failure. So Reflection is false in full generality; so arguments that rely on Reflection are unsound. It looks to me like Jessica is relying on Reflection, so I think her argument is unsound.
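For concreteness, the inference my dense character refuses is just an instance of modus tollens, with $E$ for 'it has an even number' and $R$ for 'it's red':

\[
\frac{E \rightarrow R \qquad \lnot R}{\lnot E}
\]

Reflection says she faces no rational pressure here so long as she doesn't recognize the form as valid; the intuition is that the pressure is there regardless.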

That said, there is admittedly an intuitive difference between my dense character and Jessica's ignorant one -- Jessica's character's failure to infer in accordance with valid inferences would be corrected by suitable empirical information; mine presumably wouldn't. Could this motivate a weakening of Reflection to render Jessica's verdict while avoiding the problematic one? Maybe, but it looks to me like it'd end up pretty ad hoc. (One upshot of Timothy Williamson's work on apriority is that it's very difficult precisely to state the kinds of connections to empirical investigation that underwrite certain intuitions.)

The Fregean can say this: failure to infer according to logically valid inferences is a rational failure, whether or not the subject recognizes the inference as a logically valid one. This, combined with the intuitive verdicts (no rational failure) about Frege puzzle cases, implies Fregeanism, but does not require any thesis about the transparency of content. This seems to me to be the natural thing to say.


Edit: Aidan McGlynn tells me that John Campbell and Mark Sainsbury are on the record against (1), in Campbell's 'Is Sense Transparent?' and Sainsbury's 'Fregean Sense' in his collection Departing From Frege. I'll be interested to read them.