Friday, September 28, 2012

New version of JPK; Thought Blog

Just a quick note to anyone who might be interested that I have posted a revised version of my paper, "Justification is Potential Knowledge," to my works in progress page. I expect to be submitting it in the next month or so, so if you have any comments or ideas, I'd very much welcome them.

Also, Thought now has a new blog devoted to discussion of articles that appear in that journal. So far, Brian Weatherson has written a thoughtful post about my paper, "Knowledge Norms and Acting Well." Check it out here.

Friday, September 21, 2012

Very Crazy Beliefs and JPK

Following up on yesterday's post, here is another kind of case that Jessica pressed me about last summer. Again, I'm defending:
JPK: S’s belief is justified iff S has a possible counterpart, alike to S in all relevant intrinsic respects, whose corresponding belief is knowledge.
Sometimes the counterpart belief has a different content. As in the kind of case I discussed yesterday, Jessica is worried that this flexibility about content makes the view too liberal. We can put the worry this way: maybe my view has the implication that very crazy beliefs might end up justified, because they might be realized by intrinsic states that are consistent with knowledge in very different social environments. By a ‘very crazy’ belief, I mean a belief so implausible that it would usually be better to attribute a deviant meaning to a subject who seemed to express it. To take Burge’s famous example:
If a generally competent and reasonable speaker thinks that ‘orangutan’ applies to a fruit drink, we would be reluctant, and it would unquestionably be misleading, to take his words as revealing that he thinks he has been drinking orangutans for breakfast for the last few weeks. Such total misunderstanding often seems to block literalistic mental content attribution, at least in cases where we are not directly characterizing his mistake.

The reason it is usually preferable to attribute linguistic confusion instead of very crazy beliefs is that to attribute the very crazy belief would be to attribute a radically unjustified one. So it might be a problem if my view ended up saying that such beliefs are justified.

Suppose that Emily is at the breakfast table, drinking orange juice, and expressing what appears to be the very crazy belief that she is drinking orangutans. For example, she says, in a tone of voice not at all suggestive of joking, “I try to drink orangutans every morning, because they are high in vitamin C.” Does JPK have the implication that her very crazy belief is justified? Here is a line of reasoning that suggests that it does -- thanks again to Jessica for articulating it to me last summer. There is nothing intrinsic to Emily that guarantees that her linguistic community does not use ‘orangutan’ to refer to orange juice. Consider her intrinsic duplicate, Emily*, in a world where everybody else treats the word ‘orangutan’ as referring to orange juice. Emily* speaks and believes truly, expressing the belief that she is drinking orange juice. If Emily*’s belief constitutes knowledge, then JPK has it that Emily’s is justified.

Notice that for a JPK theorist to avoid this implication, it is not enough to point to a possible version of the case in which Emily*’s belief falls short of knowledge—this would be easy. According to JPK, Emily’s belief is justified if any one of her possible intrinsic duplicates has knowledge. So to avoid the conclusion that very crazy beliefs like Emily’s are justified, the JPK theorist must argue that all of Emily’s possible intrinsic duplicates fail to know.
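The quantificational structure of this point can be made explicit. Writing J(S, b) for "S's belief b is justified," K for knowledge, and D(S, S′) for "S′ is a possible counterpart of S, alike in all relevant intrinsic respects" (this notation is mine, introduced for illustration, not the paper's), JPK and its denial come apart as an existential and a universal claim:

```latex
% JPK as a biconditional (notation is illustrative, not the paper's):
J(S, b) \;\longleftrightarrow\; \exists S' \bigl( D(S, S') \wedge K(S', b') \bigr)
% where b' is S'’s corresponding belief, possibly with different content.

% Denying justification therefore requires a universal claim:
\neg J(S, b) \;\longleftrightarrow\; \forall S' \bigl( D(S, S') \rightarrow \neg K(S', b') \bigr)
```

This is why pointing to one duplicate who fails to know does nothing; the objector needs every duplicate to fail.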

Let us consider further what Emily and Emily* must be like; further details of the case will be relevant. Part of what makes Emily’s belief so very crazy is that she is so out of touch with her linguistic community. On the most natural way of filling out the case, Emily sometimes encounters uses of the word ‘orangutan’ that strike her as very strange. Since she thinks that orangutans are a kind of fruit juice, she doesn’t really know what to make of the plaque that says ‘orangutans’ at the zoo, next to some Asian primates. That sign seems to Emily to suggest that these apes are orange juice! Perhaps she thinks it is an error, or a prank. Or maybe it’s a sign not for the exhibit, but for some unseen orange juice nearby. On this most natural way of understanding the case, Emily has lots of evidence that her way of understanding ‘orangutan’ is wrong. This evidence impacts her internal state; all of her intrinsic duplicates also have evidence suggesting that ‘orangutan’ doesn’t mean what she thinks it does. There is, I suggest, every reason to consider this evidence to be a defeater for those duplicates’ knowledge. Even though Emily* expresses a truth when she says to herself, “orangutans are a kind of fruit juice,” she has lots of evidence that this is false. That evidence prevents Emily* from knowing; so JPK need not say that Emily’s very crazy belief is justified.

However, the analysis of the previous paragraph relied on a particular interpretation of the case. Although it is, I maintain, the most natural one, it is not the only possible one. What if we suppose that Emily has no internal evidence against the correctness of her bizarre use of ‘orangutan’? In this case, Emily* will have no defeater; might she therefore have knowledge? It depends, I think, on how each came to have their way of using the term. Suppose that, although ‘orangutan’ functions as it actually does in Emily’s linguistic community, she has never been taught this term. She spontaneously decided, for no particular reason, to use the term to refer to orange juice, and it’s just a coincidence that it happens to be the same word as that used in the wider community for orangutans. We can suppose that she’s encountered the word from time to time, but in impoverished contexts which provide no reason to suspect that her usage is incorrect. For example, she sometimes overhears people saying “I like orangutans”, without the additional context that would cue her into supposing this to be anything other than an expression of esteem for orange juice. (We include this limited contact to make it plausible that she really is using the public term.) In this case, Emily has formed beliefs about the meaning of a public term rather irresponsibly; this fact will be reflected in her intrinsic state. So Emily*, too, will have come irresponsibly to believe that “orangutan” means orange juice; even though her belief is true, it is not knowledge. So on this version of the case, too, the JPK theorist can avoid attributing justified belief to Emily.

What if, instead, we suppose that Emily thinks that orangutans are orange juice because of misleading evidence to the effect that ‘orangutan’ means orange juice? Her older brother, perhaps, decided it’d be funny to teach her that as a gullible child, and she’s never encountered evidence to the contrary. Now Emily’s belief looks like it might well have been formed responsibly. So there is no obvious obstacle to supposing that Emily* has knowledge. In that case, it looks like JPK will imply that Emily’s belief, very crazy though its content is, is justified after all. This strikes me as the correct result; it is a familiar instance of a false belief that is justified on the proper basis of misleading evidence.

So it seems to me that JPK does not have problematically liberal implications about the justification of very crazy beliefs. In the most plausible versions of such cases, very crazy beliefs come along with intrinsic features that are inconsistent with knowledge; in the versions where they do not, it is intuitively plausible to attribute justified belief.

Thursday, September 20, 2012

Slow Switch, JPK, and Validity

I've been working for a while on a paper defending this knowledge-first approach to doxastic justification:
JPK: S’s belief is justified iff S has a possible counterpart, alike to S in all relevant intrinsic respects, whose corresponding belief is knowledge.
One of the moves I often use in that paper involves an exploitation of content externalism. My belief is justified if my intrinsically identical counterpart's is knowledge -- but his knowledge needn't have the same content as my justified belief.

When I presented this paper in St Andrews last summer, Jessica Brown expressed the worry that the kind of externalism I was relying on made the view too liberal. She was worried that there'd be cases of intuitively unjustified beliefs, even though there were intrinsically identical counterparts who had knowledge. One possible family of such cases involves 'slow switch' subjects -- people who change in their external environments in a way that implies a change in content, without realizing it. Suppose someone commits the fallacy of equivocation because she has been slow-switched; won't JPK have the implication that her resultant belief is justified anyway? I think that JPK probably will have this implication. But I don't think it's intuitively the wrong one. Consider a case. This example is borrowed from Jessica's book.

Sally starts out as a regular earth-person, who knows some stuff about aluminum, but not much detail about its chemical composition. She does know, however, that some of it is mined in Australia, so she says to herself:

(1) Aluminum is mined in Australia.

One night, unbeknownst to Sally, she is moved to Twin Earth, where there is no aluminum, but instead some superficially similar stuff, twaluminum. After a while there, Sally has thoughts about twaluminum. In particular, she comes to know that some of it is mined in the Soviet Union. This is the belief she expresses when she says to herself:

(2) Aluminum is mined in the Soviet Union.

She hasn't forgotten her previous knowledge, so she still knows (1). And she's unaware that "aluminum" in (1) refers to different stuff than does "aluminum" in (2). So she might well put these together and infer her way to a claim she'd express by:

(3) Something is mined in Australia and the Soviet Union.

Sally equivocates on "aluminum", because of her external environment. Intrinsically, she's just like her counterpart who stays on regular earth and comes to know that aluminum is mined in both places, and (validly) concludes that something is mined in both places. So JPK has it that Sally's conclusion, (3), is justified, even though it is the product of equivocation.
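The logical structure of the equivocation can be sketched as follows (the notation is mine, not from the post or from Jessica's book). Sally's inference has the surface form of conjunction introduction plus existential generalization, which is valid only if "aluminum" denotes the same kind in both premises:

```latex
% Valid form: a single term 'a' throughout,
% with F = 'mined in Australia', G = 'mined in the Soviet Union':
F(a),\; G(a) \;\vdash\; \exists x \,\bigl( F(x) \wedge G(x) \bigr)

% Sally's actual inference: 'aluminum' denotes aluminum (a) in
% premise (1) but twaluminum (a*) in premise (2), so the form is
F(a),\; G(a^{*}) \;\nvdash\; \exists x \,\bigl( F(x) \wedge G(x) \bigr)
```

Her earth-bound counterpart's reasoning instantiates the valid form, since for him "aluminum" is univocal; that is why his conclusion can be knowledge while hers is merely justified.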

I think, however, that this is exactly the right result. It is intuitively plausible to suppose that Sally's appearances are misleading, but her belief is justified. This means that invalid reasoning doesn't always interfere with the justification of beliefs. But that's what we should think about this case, independently of JPK.

I'd be interested to hear any thoughts from any readers, but especially answers to these:

(a) Do you agree that it is intuitive to describe Sally's conclusion (3) as justified?
(b) Do you see any other, more problematic implications for JPK that derive from slow-switch considerations?

Thursday, September 13, 2012

Cappelen on Explaining Away Intuitions

The 6-page Chapter 5 of Herman Cappelen's Philosophy Without Intuitions is an argument against "Explain":
Explain: Suppose A has shown (or at least provided good arguments in favor of) not-p. If many of A's interlocutors (and maybe A herself) are inclined to sincerely utter (and so commit to) 'Intuitively, p', then A is under an intellectual obligation to explain why this is the case (i.e. why there was or is this inclination to utter and commit to 'Intuitively, p'). She should not full-out endorse not-p before she has discharged this obligation.
(The principle has this metasemantic character because Herman thinks that 'intuitively' is context-sensitive, and that this is the only way to capture the attitude in its generality.) Against Explain, Herman considers various things that people might mean by sentences like "intuitively, p", and suggests that, for each of them, Explain looks pretty unmotivated. For example, when considering the idea that it means something like "at the outset of inquiry we believed or were inclined to believe that p", he writes:
When 'Intuitively, p' is so interpreted, it is hard to see any reason to accept Explain. Suppose a philosopher A has presented a good argument for not-p. The fact that some judge or are inclined to judge that p before thinking carefully about the topic isn't something that in general needs to be explained by A. The question under discussion is whether p is the case. The argument for not-p addressed and settled that question. (90)
As with so much of Herman's book, I'm in agreement with the main thrust here. In a paper I wrote on this topic a little while back ("Explaining Away Intuitions"), I said, along very similar lines to Herman's, that:

Widespread practice notwithstanding, it is not prima facie obvious why philosophers should, in general, be concerned with explaining intuitions, or with explaining them away. Intuitions are psychological entities; philosophical theories are not, in general, psychological theories. Ontologists theorize about what there is; it is quite another matter, one might think, what people think there is. Epistemologists concern themselves with knowledge, not with folk intuitions about knowledge.
And:
If I’m to theorize about, say, the nature of reference, I should not feel at all guilty if I fail to explain why people like chocolate, or why the Detroit Lions are so bad. Why should I feel differently about the fact that some people think that in Kripke’s story, the name ‘Gödel’ refers to Schmidt? This psychological fact is interesting, and is, it seems to me, well worth explaining. But it is not clear why it should be the reference theorist’s job to explain it. His job is to explain reference, not to explain intuitions about reference.

Obviously, with respect to these passages, Herman and I are very much on the same page. However, I went on in that paper to make a major caveat, which I think Herman may be overlooking. Sometimes, considerations having to do with intuitions are relevant to the nonpsychological question, too. This may be so even if the evidence doesn't derive from intuition in any interesting sense. (In other words, I don't think that the plausible version of Explain relies on the truth of anything like Centrality.) I agree with Herman that if you have an argument that you recognize to be conclusive for a given philosophical thesis, then you don't have to worry too much about other people's intuitions. But sometimes you need to worry about those intuitions in order to be able to recognize that an argument is conclusive. Recognizing that an argument is a good one is a cognitive achievement, and intuitions can be relevant to whether it is attained. They might, for example, defeat one's justification for a given premise.

This observation isn't inconsistent with the letter of Herman's remarks in Chapter 5. In a footnote, he clarifies that his scope is limited: "[t]o make things simple, I assume that we have settled that not-p (or at least made it sufficiently plausible for us to endorse it) and that all that remains as a potential obstacle is the commitment to 'Intuitively, p'." But this isn't always—or, in the interesting cases, even very often—the case. As I wrote in my paper mentioned above:

Sometimes, for instance, a philosopher may be deliberating about a particular view, without being at all sure what to think. I find in myself conflicting intuitions, and don’t know which to endorse. If I can see that one of those intuitions is a member of a class that I’m likely to find appealing even if false, this might provide me with some reason to prefer the other. The Horowitz case provides a nice example: if I am in internal tension between (a) the thought that it’s better to do that which results in more lives being saved, and (b) the thought that it’s wrong to kill somebody in a way over and above the way it’s wrong to let somebody die, I may, if I’m convinced by her explaining‐away, discount (b) as the product of a general error in rationality.
So I think that by focusing on the cases where one has already identified a conclusive argument, Herman is probably not looking at the best candidates for situations in which consideration of intuitions might be interesting.

Wednesday, September 12, 2012

Cappelen on Intuition and Philosophical Exceptionalism

I'm reading through Herman Cappelen's Philosophy Without Intuitions again, trying to settle on a few discussion points to pull out for a review. The book is an extended argument against 'Centrality', the thesis that
[c]ontemporary analytic philosophers rely on intuitions as evidence (or as a source of evidence) for philosophical theories. (3)
Centrality, Herman says, is a widely-held misconception of philosophy, which has confused quite a lot of metaphilosophical theorizing, but hasn't had much effect on first-order philosophical argumentation. I'm broadly sympathetic to this conclusion, but there are a few respects in which I suspect Herman's argument might be too quick. I'll probably blog about several of them; here is one.

When clarifying the target of his critique, Herman specifies that the interesting version of Centrality should be interpreted as applying distinctively to philosophy, as opposed to other disciplines:
Since Centrality is a claim about what is characteristic of philosophers, it should not be construed as an instantiation of a universal claim about all intellectual activity or even a very wide domain of intellectual activity. Suppose that all human cognition (or a very wide domain of intellectual life) appeals to intuitions as evidence, from which we can derive as a special instance that philosophers appeal to intuitions as evidence. Such a view would not vindicate Centrality, since according to Centrality the appeal to intuitions as evidence is meant to differentiate philosophy—and, perhaps, a few other kindred disciplines—from inquiries into the migration patterns of salmon or inflation in Argentina, say. If it turns out that the alleged reliance on intuitions is universal or extends far beyond philosophy and other allegedly a priori disciplines, that would undermine Centrality as it is construed in this work. ... As a result, it will be crucial when evaluating an argument for the significance of intuitions to keep track of its scope. An argument that shows that all intellectual activity relies on intuitions as evidence, and then derives Centrality as a corollary, will not be acceptable given how Centrality is presented by its proponents. (16)
I think that Herman is overlooking the following possibility, which is worthy of consideration: evidential reliance on intuition is ubiquitous, and not distinctive of philosophy. However, philosophy is unusual in that (a) a higher proportion of the interesting action involves the contribution of intuition than it tends to in other fields, and (b) in some canonical instances of philosophy, intuition provides all the relevant evidence. According to the picture I'm thinking of, intuition is one important source of evidence everywhere, and it's playing a particularly interesting role in philosophy. If this were true, I think it would vindicate an interesting version of Centrality, and one that makes a reckoning with the epistemic significance of intuition a pressing issue for philosophers, even though it would not claim that intuition fails to be a source of evidence in other realms. I'm inclined to interpret at least many of those philosophers who do emphasize the role of intuitions in philosophy as thinking in something like this way.

I do think the view sketched here is wrong; establishing this is one of the central aims of my forthcoming book with Ben Jarvis. But I don't think Herman is right to leave it off the conceptual map.