Showing posts with label Jessica Brown.

Saturday, October 05, 2013

Jessica Brown on evidence and luminosity

In "Thought Experiments, Intuitions, and Philosophical Evidence," Jessica Brown introduces a problem for "evidence neutrality" deriving from Williamson's anti-luminosity arguments: evidence neutrality implies that if S has E as evidence, it is always possible for S's community to know that E is evidence, which entails the false claim that evidence is luminous. Sounds ok. Then she writes this puzzling passage:
We might wonder whether we could overcome this first problem by weakening the content element of evidence neutrality. Instead of claiming that if p is part of a subject’s evidence, then her community can agree that p is evidence, the relevant condition could be weakened to the claim that her community can agree that p is true. Although this revised version of the evidence-neutrality principle avoids Williamson’s objection that one is not always in a position to know what one’s evidence is, it faces an objection from Williamson’s anti-luminosity argument. Williamson claims to have established that no nontrivial condition is luminous, where a condition is luminous if and only if for every case a, if in a C obtains, then in a one is in a position to know that C obtains (2000, 95). There is not space here to assess the success of Williamson’s anti-luminosity argument. However, assuming that it is successful, it seems that no mere tinkering with the content element of evidence neutrality will suffice to defend it.
I'm just not seeing the problem here. The proposal we're considering is this: any time S has E as evidence, S (and/or S's community) is in a position to know that E is true. But this does not imply that any non-trivial condition is luminous. The claim that evidence is luminous would need knowledge that E is evidence on the right-hand side; the claim that truth is luminous would need no restriction to evidence on the left-hand side. Saying that evidence requires being in a position to know truth looks wholly consistent with Williamson's anti-luminosity argument. Indeed, setting aside the role of the community -- which as far as I can tell is idle in the argument Brown is considering -- it follows trivially from Williamson's own view, E=K. Notice that S's knowing that p entails that S is in a position to know that p is true; this is no violation of anti-luminosity.
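To make the contrast explicit, here is one way of regimenting the three principles in play. The formalization is mine, not Brown's or Williamson's; read $\mathbb{K}_S\,\phi$ as "S is in a position to know that $\phi$":

```latex
% Williamson's luminosity of a condition C (2000, 95):
\text{(Lum)}\quad \forall a\,\big[\,C \text{ obtains in } a \;\rightarrow\; \text{in } a,\ \mathbb{K}(C \text{ obtains})\,\big]

% The weakened evidence-neutrality condition Brown considers:
\text{(Weak)}\quad E \in \mathrm{Ev}(S) \;\rightarrow\; \mathbb{K}_S(E \text{ is true})

% What luminosity of evidence would instead require:
\text{(EvLum)}\quad E \in \mathrm{Ev}(S) \;\rightarrow\; \mathbb{K}_S\big(E \in \mathrm{Ev}(S)\big)
```

(Weak) restricts its antecedent to evidence and puts only truth, not evidence-membership, under the knowledge operator; so it is an instance neither of (EvLum) nor of (Lum) for the condition "E is true". And given E=K, the antecedent of (Weak) just is $K_S E$, which entails its consequent.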
Anybody see what I'm missing?

Friday, September 21, 2012

Very Crazy Beliefs and JPK

Following up on yesterday's post, here is another kind of case that Jessica pressed me about last summer. Again, I'm defending:
JPK: S’s belief is justified iff S has a possible counterpart, alike to S in all relevant intrinsic respects, whose corresponding belief is knowledge.
Sometimes the counterpart belief has a different content. As in the kind of case I discussed yesterday, Jessica is worried that this flexibility about content makes the view too liberal. We can put the worry this way: maybe my view has the implication that very crazy beliefs might end up justified, because they might be realized by intrinsic states that are consistent with knowledge in very different social environments. By a ‘very crazy’ belief, I mean a belief that is so implausible, it would usually be better to attribute to a subject who seemed to express it a deviant meaning. To take Burge’s famous example:
If a generally competent and reasonable speaker thinks that ‘orangutan’ applies to a fruit drink, we would be reluctant, and it would unquestionably be misleading, to take his words as revealing that he thinks he has been drinking orangutans for breakfast for the last few weeks. Such total misunderstanding often seems to block literalistic mental content attribution, at least in cases where we are not directly characterizing his mistake.

The reason it is usually preferable to attribute linguistic confusion instead of very crazy beliefs is that to attribute the very crazy belief would be to attribute a radically unjustified one. So it might be a problem if my view ended up saying that such beliefs are justified.

Suppose that Emily is at the breakfast table, drinking orange juice, and expressing what appears to be the very crazy belief that she is drinking orangutans. For example, she says, in a tone of voice not at all suggestive of joking, “I try to drink orangutans every morning, because they are high in vitamin C.” Does JPK have the implication that her very crazy belief is justified? Here is a line of reasoning that suggests that it does -- thanks again to Jessica for articulating it to me last summer. There is nothing intrinsic to Emily that guarantees that her linguistic community does not use ‘orangutan’ to refer to orange juice. Consider her intrinsic duplicate, Emily*, in a world where everybody else treats the word ‘orangutan’ as referring to orange juice. Emily* speaks and believes truly, expressing the belief that she is drinking orange juice. If Emily*’s belief constitutes knowledge, then JPK has it that Emily’s is justified.

Notice that for a JPK theorist to avoid this implication, it is not enough to point to a possible version of the case in which Emily*’s belief falls short of knowledge—this would be easy. According to JPK, Emily’s belief is justified if any one of her possible intrinsic duplicates has knowledge. So to avoid the conclusion that very crazy beliefs like Emily’s are justified, the JPK theorist must argue that all of Emily’s possible intrinsic duplicates fail to know.
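The quantificational structure of the biconditional is what drives this point. Regimenting JPK (my notation, not anything in the original statement of the view):

```latex
\text{(JPK)}\quad \mathrm{J}(S, B) \;\leftrightarrow\; \exists S^{*}\big[\,\mathrm{Dup}(S^{*}\!, S) \wedge \mathrm{K}(S^{*}\!, B^{*})\,\big]

% Contraposing the right-to-left direction: to deny justification,
% one must deny knowledge for every duplicate, not just for one.
\neg\mathrm{J}(S, B) \;\rightarrow\; \forall S^{*}\big[\,\mathrm{Dup}(S^{*}\!, S) \rightarrow \neg\mathrm{K}(S^{*}\!, B^{*})\,\big]
```

where $\mathrm{Dup}(S^{*}, S)$ says that $S^{*}$ is a counterpart alike to $S$ in all relevant intrinsic respects, and $B^{*}$ is the corresponding belief.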

Let us consider in more detail what Emily and Emily* must be like; further details of the case will be relevant. Part of what makes Emily’s belief so very crazy is that she is so out of touch with her linguistic community. On the most natural way of filling out the case, Emily sometimes encounters uses of the word ‘orangutan’ that strike her as very strange. Since she thinks that orangutans are a kind of fruit juice, she doesn’t really know what to make of the plaque that says ‘orangutans’ at the zoo, next to some Asian primates. That sign seems to Emily to suggest that these apes are orange juice! Perhaps she thinks it is an error, or a prank. Or maybe it’s a sign not for the exhibit, but for some unseen orange juice nearby. On this most natural way of understanding the case, Emily has lots of evidence that her way of understanding ‘orangutan’ is wrong. This evidence impacts her internal state; all of her intrinsic duplicates also have evidence suggesting that ‘orangutan’ doesn’t mean what she thinks it does. There is, I suggest, every reason to consider this evidence a defeater of those duplicates’ knowledge. Even though Emily* expresses a truth when she says to herself, “orangutans are a kind of fruit juice,” she has lots of evidence that this is false. That evidence prevents Emily* from knowing; so JPK need not say that Emily’s very crazy belief is justified.

However, the analysis of the previous paragraph relied on a particular interpretation of the case. Although it is, I maintain, the most natural one, it is not the only possible one. What if we suppose that Emily has no internal evidence against the correctness of her bizarre use of ‘orangutan’? In this case, Emily* will have no defeater; might she therefore have knowledge? It depends, I think, on how each came to have their way of using the term. Suppose that, although ‘orangutan’ functions as it actually does in Emily’s linguistic community, she has never been taught this term. She spontaneously decided, for no particular reason, to use the term to refer to orange juice, and it’s just a coincidence that it happens to be the same word as that used in the wider community for orangutans. We can suppose that she’s encountered the word from time to time, but in impoverished contexts which provide no reason to suspect that her usage is incorrect. For example, she sometimes overhears people saying “I like orangutans”, without the additional context that would cue her into supposing this to be anything other than an expression of esteem for orange juice. (We include this limited contact to make it plausible that she really is using the public term.) In this case, Emily has formed beliefs about the meaning of a public term rather irresponsibly; this fact will be reflected in her intrinsic state. So Emily*, too, will have come irresponsibly to believe that “orangutan” means orange juice; even though her belief is true, it is not knowledge. So on this version of the case, too, the JPK theorist can avoid attributing justified belief to Emily.

What if, instead, we suppose that Emily thinks that orangutans are orange juice because of misleading evidence to the effect that ‘orangutan’ means orange juice? Her older brother, perhaps, decided it’d be funny to teach her that as a gullible child, and she’s never encountered evidence to the contrary. Now Emily’s belief looks like it might well have been formed responsibly. So there is no obvious obstacle to suggesting that Emily* has knowledge. For in this case, it looks like JPK will suggest that Emily’s belief, very crazy though its content is, is justified after all. This strikes me as the correct result; it is a familiar instance of a false belief that is justified on the proper basis of misleading evidence.

So it seems to me that JPK does not have problematically liberal implications about the justification of very crazy beliefs. On the most plausible versions of the cases, very crazy beliefs come along with intrinsic features that are inconsistent with knowledge; for the versions that do not, it is intuitively plausible to attribute justified belief.

Thursday, September 20, 2012

Slow Switch, JPK, and Validity

I've been working for a while on a paper defending this knowledge-first approach to doxastic justification:
JPK: S’s belief is justified iff S has a possible counterpart, alike to S in all relevant intrinsic respects, whose corresponding belief is knowledge.
One of the moves I often use in that paper involves an exploitation of content externalism. My belief is justified if my intrinsically identical counterpart's is knowledge -- but his knowledge needn't have the same content as my justified belief.

When I presented this paper in St Andrews last summer, Jessica Brown expressed the worry that the kind of externalism I was relying on made the view too liberal. She was worried that there'd be cases of intuitively unjustified beliefs, even though there were intrinsically identical counterparts who had knowledge. One possible family of such cases involves 'slow switch' subjects -- people who change in their external environments in a way that implies a change in content, without realizing it. Suppose someone commits the fallacy of equivocation because she has been slow-switched; won't JPK have the implication that her resultant belief is justified anyway? I think that JPK probably will have this implication. But I don't think it's intuitively the wrong one. Consider a case. This example is borrowed from Jessica's book.

Sally starts out a regular earth-person, who knows some stuff about aluminum, but not much detail about its chemical composition. She does know, however, that some of it is mined in Australia, so she says to herself:

(1) Aluminum is mined in Australia.

One night, unbeknownst to Sally, she is moved to Twin Earth, where there is no aluminum, but instead, some superficially similar stuff twaluminum. After a while there, Sally has thoughts about twaluminum. In particular, she comes to know that some of it is mined in the Soviet Union. This is the belief she expresses when she says to herself:

(2) Aluminum is mined in the Soviet Union.

She hasn't forgotten her previous knowledge, so she still knows (1). And she's unaware that "aluminum" in (1) refers to different stuff than does "aluminum" in (2). So she might well put these together and infer her way to a claim she'd express by:

(3) Something is mined in Australia and the Soviet Union.

Sally equivocates on "aluminum", because of her external environment. Intrinsically, she's just like her counterpart who stays on regular earth and comes to know that aluminum is mined in both places, and (validly) concludes that something is mined in both places. So JPK has it that Sally's belief in her conclusion, (3), is justified, even though it is the product of equivocation.

I think, however, that this is exactly the right result. It is intuitively plausible to suppose that Sally's appearances are misleading, but her belief is justified. This means that invalid reasoning doesn't always interfere with the justification of beliefs. But that's what we should think about this case, independently of JPK.

I'd be interested to hear any thoughts from any readers, but especially answers to these:

(a) Do you agree that it is intuitive to describe Sally's conclusion (3) as justified?
(b) Do you see any other, more problematic implications for JPK that derive from slow-switch considerations?

Wednesday, February 01, 2012

Rationality and Fregean Content

I haven't been updating my blog since moving to UBC last fall, partly because I've been busy preparing new courses and grant applications and settling into a new city. (My two biggest professional bits of news over the last while, for anyone interested who hasn't already heard elsewhere, are that The Rules of Thought, my book with Ben Jarvis, is now under contract with OUP, and I'll be beginning an Assistant Professorship at UBC this summer.)

I'm now starting to shift back into research mode, however, and blog activity may come back up accordingly.

One of the philosophy books that has been on my 'to-read' list for a long time is Jessica Brown's Anti-Individualism and Knowledge; I've been interested in the relationship between mental content and epistemology for a while now. Of course if I'd been cleverer about it, I'd've read the book while I worked at St Andrews and spoke to Jessica regularly, but: better late than never.

Among the interesting things Jessica is up to in her book is an argument that Fregeanism about content is inconsistent with -- or at least, fits poorly with -- anti-individualism. This is the negation of one of the chapters of The Rules of Thought, so I wanted to attend especially to the argument. (Thanks to Sandy Goldberg for bringing this connection to my attention recently.)

One of Jessica's arguments boils down to this. (I'm looking at pp. 200-201.)

  1. Fregean sense depends for its motivation on the transparency of sameness of mental content.

  2. Anti-individualism is inconsistent with the transparency of sameness of mental content.

  3. Therefore, if anti-individualism is true, then Fregean sense is unmotivated.


In defense of (1), Jessica suggests that, were it possible for a subject to be wrong about whether two token concepts express the same content, the failure to make logically valid inferences would be consistent with full rationality. Her character Celeste is in a Frege case:
Celeste fails to make the simple valid inference ... since she does not realize that the relevant thought constituents have the same content and thus that the inference is valid. Further, she can come to the correct view only by using empirical information. On this view, her failure to make the simple valid inference does not impugn her rationality, for even a rational subject would fail to make a valid inference that she does not realize is valid.

Jessica suggests that Fregeanism is motivated by the possibility of rationally holding what would be according to non-Fregean views contradictory sets of beliefs, or rationally declining to infer according to what such views would say are logically valid inferences. I agree -- a central motivation for Fregeanism is to explain why there's nothing irrational about believing Hesperus to be F and believing Phosphorus not to be F. But why does this rely on the assumption of the transparency of sameness of content? Jessica says in the passage above that there is an alternate explanation available, if transparency is denied: one doesn't make what is in fact a logically valid inference because one doesn't realize that it is valid, and this is consistent with full rationality.

Jessica's argument seems to rely on this claim:

(Reflection) If a subject doesn't realize that an inference is valid, then she faces no rational pressure to make it.

But Reflection strikes me as a pretty dubious principle in full generality. Suppose somebody is pretty dense, and fails to realize that modus tollens is a valid inference form, and so fails to realize that various instances of it are valid. She sits there and thinks "if it has an even number, then it's red" and "it's not red", and finds herself with no inclination to infer "it has no even number". Surely her ignorance doesn't excuse her rational failure. So Reflection is false in full generality; so arguments that rely on Reflection are unsound. It looks to me like Jessica is relying on Reflection, so I think her argument is unsound.

That said, there is admittedly an intuitive difference between my dense character and Jessica's ignorant one -- Jessica's character's failure to infer in accordance with valid inferences would be corrected by suitable empirical information; mine presumably wouldn't. Could this motivate a weakening of Reflection to render Jessica's verdict while avoiding the problematic one? Maybe, but it looks to me like it'd end up pretty ad hoc. (One upshot of Timothy Williamson's work on apriority is that it's very difficult precisely to state the kinds of connections to empirical investigation that underwrite certain intuitions.)

The Fregean can say this: failure to infer according to logically valid inferences is a rational failure, whether or not the subject recognizes the inference as a logically valid one. This, combined with the intuitive verdicts (no rational failure) about Frege puzzle cases, implies Fregeanism, but does not require any thesis about the transparency of content. This seems to me to be the natural thing to say.

 

Edit: Aidan McGlynn tells me that John Campbell and Mark Sainsbury are on the record against (1), in Campbell's 'Is Sense Transparent?' and Sainsbury's 'Fregean Sense' in his collection Departing From Frege. I'll be interested to read them.

Thursday, October 28, 2010

Knowledge Norms and Pragmatic Encroachment

I'm thinking a bit more today about the point I made in a post yesterday about the use of intuitions about cases to evaluate knowledge norms. That point was basically that facts about whether S knows p and whether S is well-enough situated epistemically in order appropriately to X don't by themselves say anything about the knowledge norm of practical reasoning; S may know p without X's being appropriate just by virtue of p's not being a good enough reason to X. Yesterday I used this observation to rebut a certain kind of argument against knowledge norms.

I now think that, in addition to that use, this observation undercuts certain implications sometimes drawn from knowledge norms. In particular, I think it points to a lacuna in one of the central arguments of Fantl and McGrath's defense of pragmatic encroachment. I think this is a pretty fair reconstruction of their §4.3:

  1. KJ: If you know that p, then p is warranted enough to justify you in phi-ing, for any phi.

  2. Consider some low-stakes action, X, for which LOW has q as a sufficient reason; LOW can appropriately perform X. (Their example: Matt knows the train is a local; this justifies his boarding it.)

  3. There is a possible counterpart of LOW in the same epistemic position but with higher stakes, HIGH, such that HIGH cannot appropriately perform X. (Their example: Jeremy really needs to get off at a local stop; the stakes are too high for him to risk boarding.)

  4. So q is not warranted enough to justify HIGH in X-ing.

  5. So HIGH does not know q.

  6. So two subjects in the same epistemic position can differ with respect to knowledge of q.

  7. So purism is false.


(For maximum faithfulness to F&M's broader intentions, we should understand this entire argument, including its conclusion, as being offered under the conditional assumption that fallibilism is true -- that it's possible to know that p, even though there is some epistemic possibility that not-p.)

I'm concerned with the move from (3) to (4). It has pretty much the same form as the arguments sketched above, except that it's holding different bits fixed. Jessica Brown argued from what she saw as intuitive verdicts about knowledge and appropriate action against the knowledge norm; Fantl and McGrath argue from intuitive verdicts about appropriate action and the principle of the knowledge norm against the knowledge verdict Brown found intuitive. My argument shows the flaw in both of these arguments. (In a sense, they are instances of the same argument, run in different directions.)

The move from (3) to (4) relies, like Brown's surgeon case argument, on the assumption that the knowledge of q, the non-actionability of X, and the knowledge norm are incompatible. But as I've shown, they're not. They're incompatible only on the assumption that q is, if possessed as a reason, a good enough reason to X. And there just aren't obvious intuitive verdicts about facts like these; neither are there clear theories that dictate which claims like these to accept. It's obvious enough whether S knows q, and whether S would be justified in Xing -- but whether q itself is well-enough justified to be among the reasons S has for Xing is an esoteric question on which naive intuitions are silent.

Everybody who accepts (2) and (3) has to think there's some important difference that derives from a change in the stakes that bears on actionability. But you can think this without giving up on KJ, fallibilism, or purism, if you want to. You can say that what propositions are good enough reasons to X depends in part on the stakes. That is: whether p is known is stakes-independent; so too is whether p is warranted enough to be a reason for phi-ing, for any phi. What varies by stakes is whether p, supposing that it is a reason, is by itself a good enough reason to phi. When the stakes are high, p, though still genuinely a reason, isn't a good enough reason. In lower stakes, p is a good enough reason. Insensitive knowledge; insensitive reasons for action; sensitive needs of actions for reasons.
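Schematically, the view just sketched distributes stakes-sensitivity as follows (my regimentation, not Fantl and McGrath's):

```latex
% Stakes-independent: whether S knows p.
\mathrm{K}(S, p)

% Stakes-independent, per KJ: if known, p is warranted enough
% to be a reason for any \phi.
\mathrm{K}(S, p) \;\rightarrow\; \mathrm{Reason}(S, p, \phi)

% Stakes-sensitive: whether p, as a reason, is by itself
% good enough for \phi.
\mathrm{Enough}\big(p, \phi, \mathrm{stakes}(S)\big)
```

Only the third claim varies between LOW and HIGH, which is why the step from (3) to (4) does not go through without further argument.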

This does look to me like a significant and substantive gap in F&M's argument. I'm generally pretty sympathetic to F&M-style views -- I'm a different sort of contextualist than those who are motivated by 'intellectualism' -- but this does look to me to be a potentially promising avenue for resisting pragmatic encroachment from F&M-style arguments.

It will not obviously help, however, with the knowledge intuitions about bank cases and the like. Lots of pragmatic encroachment people seem to be turning their emphasis away from these cases recently, in favor of broader theoretical arguments. Certainly, this seems to be one of F&M's aims. I rather suspect, though, that the pragmatic encroachment theorist may end up needing to rely on judgements about cases more than they think. (I don't know that that's a problem.)

Wednesday, October 27, 2010

Knowledge Norms and Intuitions about Cases

Here's a boring thought experiment that doesn't demonstrate anything.
Smith burgled the house last night; Detective Stanley is investigating the crime scene. He acquires evidence sufficient for knowledge that the burglar came in through the window, but finds very little evidence about whether it was Smith or someone else who committed the crime.

Here are two intuitive verdicts that aren't in any tension at all:

  1. Stanley knows that the burglar came in through the window.

  2. Stanley would need to have more evidence in order for it to be appropriate for him to arrest Smith.


Everybody can accept these obvious claims. In particular, these obvious claims are in no tension with the knowledge norm of practical reasoning, which claims that p can be an appropriate reason for action for S if and only if S knows that p. It would be an anemic objection to the knowledge norm to point out that Stanley knows that the burglar used the window, but needs more evidence in order for it to be appropriate to arrest Smith. That the burglar used the window just isn't a strong enough reason to arrest Smith, so this case doesn't tell us anything about what is and is not a reason. So it doesn't tell us anything about knowledge norms.

The moral of the story is that claims about who knows what, and about what actions are inappropriate, are in general insufficient to refute the knowledge norm of practical reasoning. (So, mutatis mutandis, for the knowledge norm of assertion.)

When you look at the case given above, this moral is really obvious. But sometimes, I think, it's neglected. Jessica Brown, for instance, argues against the knowledge norm of practical reasoning by citing this case:
A student is spending the day shadowing a surgeon. In the morning he observes her in clinic examining patient A who has a diseased left kidney. The decision is taken to remove it that afternoon. Later, the student observes the surgeon in theatre where patient A is lying anaesthetised on the operating table. The operation hasn’t started as the surgeon is consulting the patient’s notes. The student is puzzled and asks one of the nurses what’s going on:

Student: I don’t understand. Why is she looking at the patient’s records? She was in clinic with the patient this morning. Doesn’t she even know which kidney it is?

Nurse: Of course, she knows which kidney it is. But, imagine what it would be like if she removed the wrong kidney. She shouldn't operate before checking the patient’s records.

We have, as before, a pair of intuitive verdicts: one attributing knowledge, and another denying appropriateness of action. Brown considers this to be a counterexample to the knowledge norm of practical reasoning, but the case of the burglar shows that this cannot be enough. Just as the burglar argument was transparently invalid, because the burglar's use of the window wouldn't be sufficient reason for arresting Smith, Brown's argument is valid only on the assumption that the disease in the left kidney would be a sufficient reason for operating without checking the charts. But Brown has given us no reason to think that is so. It's entirely open to the defender of the knowledge norm to argue that knowledge of p is sufficient for p to be a reason, but that in this case, p isn't a good enough reason for action.

This strategy is always available. I think this shows that trading in intuitions about who knows what, and who ought to do what, is not a helpful strategy for evaluating knowledge norms.