I guess I ought to make a point of announcing on my blog that my book with Ben Jarvis has finally been published. (That's UK only at the moment; it'll be worldwide later this month.) Here's OUP's official page. There were some delays with the printing that pushed things back a month or two longer than expected, but on the whole, I'm very pleased with the experience we had with OUP.
One of the central themes in The Rules of Thought is that there is an important sense in which the norms of rationality are objective: they apply universally to all thinkers, regardless of the subjects' rational limitations. We're motivated to say this in significant part by cases of 'blind irrationality' -- situations where subjects fail with respect to rationality, in a way in which no intuitions or inclinations alert them to their error. The subject is doing the best he can, given his limitations; even so, there is an important sense in which cases like this involve genuine failures of rationality. In that sense, then, the demands of rationality are not relative to the subject's limitations. This motivates us to the fairly strong view that for 'rational necessities' -- roughly, those truths that someone might want to call 'analytic' or 'conceptual' or maybe 'a priori' -- subjects always have conclusive reason to accept them. That is, everyone always has propositional justification for all of these truths. For example, my grandmother has propositional justification for every arithmetical truth. This is surprising to many, but Ben and I have arguments in the book that I think really do show that this has to be right.
But there is a puzzle that comes from this way of setting things out. It's a puzzle that's brought out really nicely in this new paper by Sharon Berry, "Default Reasonableness and the Mathoids." (Her target isn't views like ours, but the intuitions she's trading in are pressing for us.) We think that complex arithmetic truths -- Fermat's Last Theorem, for instance -- are always propositionally justified in the same way that simple ones -- 1+4=5, for instance -- are. But if this is right, it's not straightforward to make sense of the intuition that simple arithmetical premises are legitimate starting places in proofs in a way that complex arithmetical premises are not. Berry's central thought experiment involves the Mathoids, who find Fermat's Last Theorem (or other complex truths) immediately, primitively compelling in the way that we find simpler truths. She observes that it's intuitive that their proof method is unjustified, but argues that it's hard to find a principled difference between them and us. This seems to me exactly right.
The solution Ben and I have been thinking about -- this is discussed briefly in the book, and at greater length in a paper we're working on -- is that we need a 'hybrid' epistemology. We're still convinced by the arguments mentioned above that there must be an important epistemic element that doesn't depend on any of the contingencies of human psychology. To this extent, we're committed to the falsity of a thoroughgoing virtue epistemology according to which epistemic competences are the only fundamental epistemically normative element in town. But the virtue story also has something importantly right: the story we tell about propositional justification can't be the whole story either. The messier, psychology-laden doxastic justification story can't be given entirely in terms of propositional justification. We need to say something more contingent, about how the belief is formed. Virtue epistemology seems promising -- we want to say that there are virtuous traits and vicious traits, with respect to believing what is propositionally justified, and that these make a difference for doxastic justification and knowledge. The challenge, of course, is to describe in what these virtues consist. (If you want to distinguish us from the Mathoids, reliabilism is a clear non-starter.)
We have some tentative ideas for how this might go, but I'll save them for later. The main point of this post is to suggest that there's strong reason to think we might need two independent stories here: one for 'pure' epistemic statuses like propositional justification (the minimalist story that is a main theme in our book) and one for 'impure' epistemic statuses that bring in the psychology (the virtue story we're thinking about now). Neither will do on its own.
Thursday, July 04, 2013
Thursday, November 15, 2012
Williamson on Apriority
Here's an argument with the conclusion that there's no deep difference between cats and dogs.
The Dogs and Cats Argument. Although a distinction between cats and dogs can be drawn, it turns out on closer examination to be a superficial one; it does not cut at the biological joints. Consider, for example, a paradigmatic cat, Felix. Felix has the following properties: (i) he has four legs, fur, and a tail; (ii) he eats canned food out of a bowl; (iii) humans like to stroke his back. Now consider a paradigmatic dog, Fido. Fido has all three of these properties as well. For instance, Fido also has four legs, and fur, and a tail, and when he eats, it is often served from a can into a bowl. And humans like to stroke Fido's back, too. In these respects, Fido and Felix are almost exactly similar. Therefore, there can't possibly be any deep biological distinction between them.

I'm sure you'll agree that the dogs and cats argument is terrible. Put a pin in that and consider another argument.
In his contribution to Al Casullo and Josh Thurow's forthcoming volume, The A Priori in Philosophy, Timothy Williamson argues against the theoretical significance of the distinction between the a priori and a posteriori. The thesis of the paper is that "although a distinction between a priori and a posteriori knowledge (or justification) can be drawn, it is a superficial one, of little theoretical interest."
It's a somewhat puzzling paper, I think, because it's not at all clear how its broad argumentative strategy is supposed to support the conclusion. Williamson does not, for instance, articulate what he takes the apriority distinction to be, then argue that it is theoretically uninteresting. Instead, he identifies certain paradigms of a priori and a posteriori knowledge, then emphasizes various similarities between them. For example, he argues that the cognitive mechanisms underwriting certain a priori judgments are similar in various respects to those that underwrite certain a posteriori judgments. Then he spends most of the rest of the paper arguing that these are not idiosyncratic features of his particular examples. But why is this supposed to be relevant?
Williamson writes:
The problem is obvious. As characterized above, the cognitive processes underlying Norman's clearly a priori knowledge of (1) and his clearly a posteriori knowledge of (2) are almost exactly similar. If so, how can there be a deep epistemological difference between them?

But I do not find this problem at all obvious. The argument at least appears to have the structure of the terrible dogs and cats argument above. The thing to say about that argument is that identifying various similarities between two things does practically nothing to show that there aren't deep differences between them. There are deep biological distinctions between cats and dogs, but they're not ones that you can find by counting their legs or examining how humans interact with them. Similarly, Williamson offers nothing at all that I can see to rule out the possibility that there is a deep distinction between the a priori and a posteriori, but it is not one that is manifest in the cognitive mechanisms underwriting these judgments. For as Williamson himself later emphasizes, there's more to epistemology than cognitive mechanisms. If apriority lives in propositional justification—which is where I think it lives—then there's just no reason to expect it to show up at this psychological level. That doesn't mean it's not a deep distinction.
That Williamson's argument needs to be treated very carefully should also be evident from the fact that prima facie, it looks like it has enough teeth to show that the distinction between knowledge and false belief is not an epistemically deep one—a conclusion that everyone, but Williamson most of all, should reject. For the cognitive processes underlying cases of knowledge are often almost exactly similar to those underlying false beliefs. Should this tempt us to ask how, then, there could be a deep epistemological difference between them? I really don't see why.
Thursday, October 11, 2012
Where does apriority live?
Here are some things that can be violent:
- Neighborhoods
- People
- Actions
Violence inheres in these different kinds of things in different kinds of ways. A violent person is liable to punch you in the face if provoked; that the neighborhood will never punch you in the face doesn't count against its violence. Still, it's not like there's not a general category, violence, that applies in some sense to violent neighborhoods, violent people, and violent actions. These things are certainly connected somehow or other.
When you have this kind of set-up, you can sensibly ask which kind of entity is the best candidate for a more fundamental bearer of the property. To put it a bit colorfully: where does the violence live? Although I can imagine some people disagreeing, it seems to me pretty plausible that the violence of a neighborhood is explained by the violence of the people who populate it, rather than vice versa. Violence doesn't live in neighborhoods. And what makes a violent person? It seems to me that it has something to do with a propensity to perform violent actions. On this way of answering the question, violence ultimately lives in actions. But maybe not: maybe there's no real way to understand a violent action independently of the violent character traits that make a person violent. Maybe violence ultimately lives in people, or in character traits. I'd be curious to hear arguments about this interesting question. It's not my area.
But my area has some similarly interesting questions, too. Consider apriority. Here are some things that can be a priori:
- Knowledge
- Justification of beliefs
- Justification for beliefs
If you believe in apriority, it's worth spending a bit of time thinking about where the apriority lives.
Friday, September 28, 2012
New version of JPK; Thought Blog
Just a quick note to anyone who might be interested that I have posted a revised version of my paper, "Justification is Potential Knowledge," to my works in progress page. I expect to be submitting it in the next month or so, so if you have any comments or ideas, I'd very much welcome them.
Also, Thought now has a new blog devoted to discussion of articles that appear in that journal. So far, Brian Weatherson has written a thoughtful post about my paper, "Knowledge Norms and Acting Well." Check it out here.
Friday, September 21, 2012
Very Crazy Beliefs and JPK
Following up on yesterday's post, here is another kind of case that Jessica pressed me about last summer. Again, I'm defending:
JPK: S’s belief is justified iff S has a possible counterpart, alike to S in all relevant intrinsic respects, whose corresponding belief is knowledge.

Sometimes the counterpart belief has a different content. As in the kind of case I discussed yesterday, Jessica is worried that this flexibility about content makes the view too liberal. We can put the worry this way: maybe my view has the implication that very crazy beliefs might end up justified, because they might be realized by intrinsic states that are consistent with knowledge in very different social environments. By a ‘very crazy’ belief, I mean a belief that is so implausible, it would usually be better to attribute to a subject who seemed to express it a deviant meaning. To take Burge’s famous example:
If a generally competent and reasonable speaker thinks that ‘orangutan’ applies to a fruit drink, we would be reluctant, and it would unquestionably be misleading, to take his words as revealing that he thinks he has been drinking orangutans for breakfast for the last few weeks. Such total misunderstanding often seems to block literalistic mental content attribution, at least in cases where we are not directly characterizing his mistake.
The reason it is usually preferable to attribute linguistic confusion instead of very crazy beliefs is that to attribute the very crazy belief would be to attribute a radically unjustified one. So it might be a problem if my view ended up saying that such beliefs are justified.
Suppose that Emily is at the breakfast table, drinking orange juice, and expressing what appears to be the very crazy belief that she is drinking orangutans. For example, she says, in a tone of voice not at all suggestive of joking, “I try to drink orangutans every morning, because they are high in vitamin C.” Does JPK have the implication that her very crazy belief is justified? Here is a line of reasoning that suggests that it does -- thanks again to Jessica for articulating it to me last summer. There is nothing intrinsic to Emily that guarantees that her linguistic community does not use ‘orangutan’ to refer to orange juice. Consider her intrinsic duplicate, Emily*, in a world where everybody else treats the word ‘orangutan’ as referring to orange juice. Emily* speaks and believes truly, expressing the belief that she is drinking orange juice. If Emily*’s belief constitutes knowledge, then JPK has it that Emily’s is justified.
Notice that for a JPK theorist to avoid this implication, it is not enough to point to a possible version of the case in which Emily*’s belief falls short of knowledge—this would be easy. According to JPK, Emily’s belief is justified if any one of her possible intrinsic duplicates has knowledge. So to avoid the conclusion that very crazy beliefs like Emily’s are justified, the JPK theorist must argue that all of Emily’s possible intrinsic duplicates fail to know.
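The quantificational structure is doing the work here. In my own rough notation (not anything from the paper), the point is that since JPK's right-hand side is existential, denying justification requires a universal claim:

```latex
% JPK is existential: J(S,b) holds iff SOME relevant intrinsic duplicate knows.
% So its denial is universal (notation mine: Dup = relevant intrinsic duplicate,
% b_{S'} = the duplicate's corresponding belief, K = knowledge):
\neg J(S, b) \;\leftrightarrow\; \forall S' \,
  \big( \mathrm{Dup}(S, S') \rightarrow \neg K(S', b_{S'}) \big)
```

Exhibiting one non-knowing duplicate establishes nothing; the universal generalization is what must be argued for.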
Let us consider further what Emily and Emily* must be like; further details of the case will be relevant. Part of what makes Emily’s belief so very crazy is that she is so out of touch with her linguistic community. On the most natural way of filling out the case, Emily sometimes encounters uses of the word ‘orangutan’ that strike her as very strange. Since she thinks that orangutans are a kind of fruit juice, she doesn’t really know what to make of the plaque that says ‘orangutans’ at the zoo, next to some Asian primates. That sign seems to Emily to suggest that these apes are orange juice! Perhaps she thinks it is an error, or a prank. Or maybe it’s a sign not for the exhibit, but for some unseen orange juice nearby. On this most natural way of understanding the case, Emily has lots of evidence that her way of understanding ‘orangutan’ is wrong. This evidence impacts her internal state; all of her intrinsic duplicates also have evidence suggesting that ‘orangutan’ doesn’t mean what she thinks it does. There is, I suggest, every reason to consider this evidence to be a defeater to those duplicates’ knowledge. Even though Emily* expresses a truth when she says to herself, “orangutans are a kind of fruit juice,” she has lots of evidence that this is false. That evidence prevents Emily* from knowing; so JPK need not say that Emily’s very crazy belief is justified.
However, the analysis of the previous paragraph relied on a particular interpretation of the case. Although it is, I maintain, the most natural one, it is not the only possible one. What if we suppose that Emily has no internal evidence against the correctness of her bizarre use of ‘orangutan’? In this case, Emily* will have no defeater; might she therefore have knowledge? It depends, I think, on how each came to have their way of using the term. Suppose that, although ‘orangutan’ functions as it actually does in Emily’s linguistic community, she has never been taught this term. She spontaneously decided, for no particular reason, to use the term to refer to orange juice, and it’s just a coincidence that it happens to be the same word as that used in the wider community for orangutans. We can suppose that she’s encountered the word from time to time, but in impoverished contexts which provide no reason to suspect that her usage is incorrect. For example, she sometimes overhears people saying “I like orangutans”, without the additional context that would lead her to take this to be anything other than an expression of esteem for orange juice. (We include this limited contact to make it plausible that she really is using the public term.) In this case, Emily has formed beliefs about the meaning of a public term rather irresponsibly; this fact will be reflected in her intrinsic state. So Emily*, too, will have come irresponsibly to believe that “orangutan” means orange juice; even though her belief is true, it is not knowledge. So on this version of the case, too, the JPK theorist can avoid attributing justified belief to Emily.
What if, instead, we suppose that Emily thinks that orangutans are orange juice because of misleading evidence to the effect that ‘orangutan’ means orange juice? Her older brother, perhaps, decided it’d be funny to teach her that as a gullible child, and she’s never encountered evidence to the contrary. Now Emily’s belief looks like it might well have been formed responsibly. So there is no obvious obstacle to suggesting that Emily* has knowledge. For in this case, it looks like JPK will suggest that Emily’s belief, very crazy though its content is, is justified after all. This strikes me as the correct result; it is a familiar instance of a false belief that is justified on the proper basis of misleading evidence.
So it seems to me that JPK does not have problematically liberal implications about the justification of very crazy beliefs. On the most plausible versions of such cases, the beliefs come along with intrinsic features that are inconsistent with knowledge; for the versions that do not, it is intuitively plausible to attribute justified belief.
Thursday, September 20, 2012
Slow Switch, JPK, and Validity
I've been working for a while on a paper defending this knowledge-first approach to doxastic justification:
JPK: S’s belief is justified iff S has a possible counterpart, alike to S in all relevant intrinsic respects, whose corresponding belief is knowledge.

One of the moves I often use in that paper involves an exploitation of content externalism. My belief is justified if my intrinsically identical counterpart's is knowledge -- but his knowledge needn't have the same content as my justified belief.
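Put semi-formally, in my own notation (nothing official), the schema looks roughly like this:

```latex
% JPK, regimented (notation mine): J = justified belief, K = knowledge,
% Dup(S,S') = S' is alike to S in all relevant intrinsic respects,
% b' \sim b = b' is the counterpart belief corresponding to b.
J(S, b) \;\leftrightarrow\; \exists S' \, \exists b' \,
  \big( \mathrm{Dup}(S, S') \wedge b' \sim b \wedge K(S', b') \big)
```

The externalist move lives in the correspondence relation: nothing in it requires that b' have the same content as b.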
When I presented this paper in St Andrews last summer, Jessica Brown expressed the worry that the kind of externalism I was relying on made the view too liberal. She was worried that there'd be cases of intuitively unjustified beliefs, even though there were intrinsically identical counterparts who had knowledge. One possible family of such cases involves 'slow switch' subjects -- people who change in their external environments in a way that implies a change in content, without realizing it. Suppose someone commits the fallacy of equivocation because she has been slow-switched; won't JPK have the implication that her resultant belief is justified anyway? I think that JPK probably will have this implication. But I don't think it's intuitively the wrong one. Consider a case. This example is borrowed from Jessica's book.
Sally starts out a regular earth-person, who knows some stuff about aluminum, but not much detail about its chemical composition. She does know, however, that some of it is mined in Australia, so she says to herself:
(1) Aluminum is mined in Australia.
One night, unbeknownst to Sally, she is moved to Twin Earth, where there is no aluminum, but instead, some superficially similar stuff twaluminum. After a while there, Sally has thoughts about twaluminum. In particular, she comes to know that some of it is mined in the Soviet Union. This is the belief she expresses when she says to herself:
(2) Aluminum is mined in the Soviet Union.
She hasn't forgotten her previous knowledge, so she still knows (1). And she's unaware that "aluminum" in (1) refers to different stuff than does "aluminum" in (2). So she might well put these together and infer her way to a claim she'd express by:
(3) Something is mined in Australia and the Soviet Union.
Sally equivocates on "aluminum", because of her external environment. Intrinsically, she's just like her counterpart who stays on regular earth and comes to know that aluminum is mined in both places, and (validly) concludes that something is mined in both places. So JPK has it that Sally's conclusion, (3), is justified, even though it is the product of equivocation.
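The equivocation can be made explicit with a bit of regimentation (my own gloss): in (1) Sally's word 'aluminum' denotes aluminum, a; in (2), after the switch, it denotes twaluminum, t. Her inference then has the form:

```latex
% Two premises with distinct referents; the conclusion needs a single witness.
\frac{\mathrm{Mined}(a, \mathrm{Australia}) \qquad \mathrm{Mined}(t, \mathrm{SovietUnion})}
     {\exists x \, \big( \mathrm{Mined}(x, \mathrm{Australia}) \wedge \mathrm{Mined}(x, \mathrm{SovietUnion}) \big)}
% Invalid unless a = t -- which, for Sally, it is not.
```

Her Earth-bound counterpart's inference has a single referent throughout, so the counterpart's reasoning really is valid.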
I think, however, that this is exactly the right result. It is intuitively plausible to suppose that Sally's appearances are misleading, but her belief is justified. This means that invalid reasoning doesn't always interfere with the justification of beliefs. But that's what we should think about this case, independently of JPK.
I'd be interested to hear any thoughts from any readers, but especially answers to these:
(a) Do you agree that it is intuitive to describe Sally's conclusion (3) as justified?
(b) Do you see any other, more problematic implications for JPK that derive from slow-switch considerations?
Wednesday, February 29, 2012
Goldberg on Gettier Cases and Internalism
Sanford Goldberg has an interesting new argument against mentalist internalism about justification in Analysis. I'm working on committing myself to an internalist approach to justification at the moment; Goldberg's new paper isn't enough to force me to reconsider.
The master argument of the paper, which Goldberg lays out quite succinctly, is this, which I quote:
P1. The property of being doxastically justified just is that property which turns true unGettiered belief into knowledge.
P2. No property that is internal in the Justification Internalist’s sense is the property which turns true unGettiered belief into knowledge.
Therefore
C. No property that is internal in the Justification Internalist’s sense is the property of being doxastically justified.
I think internalists have two fairly natural lines of defence. First, one might reject the very notion of some property that turns true unGettiered belief into knowledge, at least if we read 'turns into' in some kind of truth-making sort of way. No doubt there is in some weak sense a property P such that one has knowledge if and only if one has true belief, has P, and is not in a Gettier situation, but I see no reason to suppose that it will be a property any more interesting or natural than the disjunction, knows or false or Gettiered. (I rather suspect "Gettiered" itself can be understood at best conjunctively.) And I don't think there's any interesting sense in which this disjunction turns unGettiered true belief into knowledge.
In defence of this way of setting the issue up, Goldberg writes:
After all, ‘doxastic justification’ is a term of art, and so if we are to continue to use it, it must pick out something that is epistemically interesting. It picks out something epistemically interesting if P1 is true; but it is unclear whether it picks out something interesting if P1 is false. At a minimum, the burden of proof will be on those internalists who deny P1: if this is how they respond to the present argument, then we are owed an explanation of why we should care about the property of which the internalist is purporting to give us an account.
But there are other fairly natural reasons available to care about justification. For example, justification may be that property which permits knowledge, without being one that guarantees it.
The second way an internalist might resist Goldberg's argument is to reject the considerations he brings to bear in favor of his P2. He imagines someone in an evil demon situation who is an intrinsic duplicate of someone with a justified belief. Take her perceptual belief that p. Her belief must be justified, by the internalist's lights, but is not knowledge, since she is in an evil demon scenario. It is not knowledge, even if it happens to be true. This doesn't support the argument unless we can also establish that this is not a Gettier case; at the moment it rather looks like one. (She has misleading evidence for p, and reasonably forms the belief that p on that basis; it turns out that p happens to be true.)
To close off this avenue, Goldberg asks us to suppose that it is probable that our subject's beliefs are true, due to the machinations of the demon.
Still, it is easy to tell yet another variant of the Evil Demon case on which this move – to explain away the ‘no knowledge’ verdict by appeal to Gettierizing luck – is not plausible in the least. Imagine the following scenario, involving the Not-so-Evil Demon: it is just like the ordinary Evil Demon scenario except the Not-so-Evil Demon has conspired to make 65% of your Doppelgänger’s beliefs true (the other 35% being false owing to systematic illusions sustained by Not-so-Evil). Imagine your Doppelgänger in this world. For any perceptual belief (s)he has, there is a 65% chance that the belief is true. If it’s true, this is not merely lucky.
But stipulating facts about luck is a dangerous game. There is of course some sense in which the not-so-evil demon victim isn't merely lucky to believe truly, but is it the one relevant to Gettier cases? Probably not. Nothing in Gettier's original cases precludes probability of true belief of this sort. Go back to Jones and the Ford and Brown in Barcelona; suppose Brown is in Barcelona 65% of the time, and Smith believes that Jones has a Ford or Brown is in Barcelona, as in the original case, solely on the basis of the misleading evidence about the Ford. This is still a paradigmatic Gettier situation, even though there may be some sense in which the belief is true not merely by luck. Given this parallel, I think the internalist has every reason to regard the subject of the not-so-evil demon as in a Gettier case. So there are good grounds for resisting Goldberg's argument.
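The parallel can be made vivid with the probabilities (my regimentation of the modified Gettier example):

```latex
% F = Jones owns a Ford (false; Smith's evidence for it is misleading)
% B = Brown is in Barcelona, with P(B) = 0.65 as stipulated
\[
P(F \vee B) \;\geq\; P(B) \;=\; 0.65
\]
% So Smith's disjunctive belief is likely to be true however the Ford
% evidence turns out -- the same 65% reliability the not-so-evil demon
% confers -- yet the case remains a paradigmatic Gettier case.
```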
Saturday, July 23, 2011
Fitting the Evidence
I've never been at all sure what to make of 'evidentialism' in epistemology. What follows is a fairly naive response to Conee and Feldman; I suspect there's some discussion of these or closely related issues in the literature, and I'd be happy to be pointed to it.
Conee and Feldman think that the doxastic attitude I'm justified in having toward any given proposition is the one that fits my evidence. However, it's just not at all clear what that's supposed to mean. They offer examples, by way of illustration:
Here are three examples that illustrate the application of this notion of justification. First, when a physiologically normal person under ordinary circumstances looks at a plush green lawn that is directly in front of him in broad daylight, believing that there is something green before him is the attitude toward this proposition that fits his evidence. That is why the belief is epistemically justified. Second, suspension of judgment is the fitting attitude for each of us toward the proposition that an even number of ducks exists, since our evidence makes it equally likely that the number is odd. Neither belief nor disbelief is epistemically justified when our evidence is equally balanced. And third, when it comes to the proposition that sugar is sour, our gustatory experience makes disbelief the fitting attitude. Such experiential evidence epistemically justifies disbelief.
My problem here isn't that anything strikes me as false -- it's just that I don't see that justification has been illuminated by the connection to 'fitting the evidence'. I don't feel like I have a better antecedent grip on what the evidence is, and how to tell what fits it, than I do on what is justified. Conee and Feldman go on to observe that various views about justification are inconsistent with evidentialism, because, e.g., they have the implication that only a responsibly formed belief is justified, but some beliefs that are not responsibly formed fit the evidence. One needn't think this, though; perhaps what fits the evidence is what one would do if responsible. Or, certain reliabilist views will have the implication that Bonjour's clairvoyant character has justified beliefs; this too can be rendered consistent with the letter of evidentialism by allowing that external facts about reliability play a role in what evidence one has (or, less plausibly, which attitude fits a given body of evidence). A commitment to evidentialism per se doesn't seem to tell you much.
A theory of justification, it seems, ought to be illuminating, in the sense that it should explain justification in terms of states and relations that are antecedently well-understood. (As indicated last post, however, I don't think this constraint implies that the stuff on the right-hand-side need always be non-epistemic.)
Friday, July 15, 2011
Naturalistic Reduction of Justification
I'm starting work on a new project on epistemic justification. I'm trying to begin by laying out various perceived or actual desiderata for theories of epistemic justification. Here's one, laid out in Alvin Goldman's classic paper, "What is Justified Belief?": a theory of justification should give necessary and sufficient conditions in non-epistemic terms. We could call this a "naturalistic reduction" constraint. Goldman writes:
The term 'justified', I presume, is an evaluative term, a term of appraisal. Any correct definition or synonym of it would also feature evaluative terms. I assume that such definitions or synonyms might be given, but I am not interested in them. I want a set of substantive conditions that specify when a belief is justified. Compare the moral term 'right'. This might be defined in other ethical terms or phrases, a task appropriate to metaethics. The task of normative ethics, by contrast, is to state substantive conditions for the rightness of actions. Normative ethics tries to specify non-ethical conditions that determine when an action is right. A familiar example is act-utilitarianism, which says an action is right if and only if it produces, or would produce, at least as much net happiness as any alternative open to the agent. These necessary and sufficient conditions clearly involve no ethical notions. Analogously, I want a theory of justified belief to specify in non-epistemic terms when a belief is justified. This is not the only kind of theory of justifiedness one might seek, but it is one important kind of theory and the kind sought here.
I am not sure I feel the motivation for this constraint. I can certainly see why we might not be satisfied by a theory of justification that is circular (justification is justification) or otherwise uninformative (justified belief is belief that is epistemically good), but barring all epistemic notions from the right-hand-side seems like a pretty strong constraint. But perhaps I've misunderstood Goldman's motivation here? Is the naturalistic reduction constraint motivated by something other than informativeness?
Wednesday, January 27, 2010
Justification and Action
Fantl and McGrath argue that the combination of the following two views is problematic:
(JJ) If you are justified in believing that p, then p is warranted enough to justify you in phi-ing, for any phi. (Quoted from p. 99)
(Moderate Externalism about Justification) Justification does not supervene on the subject's internal states. In particular, external properties like reliability and Gettierizedness can make a difference in whether one is justified in a particular belief. (Paraphrased from p. 107)
Fantl and McGrath argue that (JJ) implies that 'purist fallibilism' about justification cannot be true. Now as I wrote a little while ago, I don't really buy into the notion of purism. And to be honest, I have some problems with the notion of fallibilism, too -- I'll try to write them up sometime soon. But set all that aside. The basic idea is that, if you accept (JJ), then you think that there might be two subjects that differ only in, for example, how important p is to each subject, such that one is justified in believing that p and the other is not. I guess I think that's right, although I'm thinking of things in a way different from the way Fantl and McGrath do.
Fantl and McGrath think that people who accept this and are also moderate externalists (hereafter 'externalists') "commit themselves to counterintuitive claims about action." First, Fantl and McGrath observe the familiar point that externalists think there could be intrinsic duplicates who differ in their justification facts; externalists therefore face the New Evil Demon problem. That's familiar stuff, and, as Fantl and McGrath say, there are many possible responses. But they think things get worse once you also accept (JJ). They write:
Moderate externalists who accept JJ not only have to say that two subjects who differ only in how reliable they are can differ in what they are justified in believing. They also have to say that the subjects can differ in what they are justified in doing. This is counterintuitive. (108)
It's not really clear to me that this is a counterintuitive verdict. But more to the point, I just don't see why Fantl and McGrath think externalists who accept JJ are thereby committed to it. They don't, as far as I can see, explain why they think this result should obtain. It plainly doesn't follow in any direct way from externalism and JJ; externalism says that external properties can influence belief-justification facts, and JJ gives one link between belief-justification and action-justification, but it's just nowhere near strong enough to imply, as Fantl and McGrath seem to think it implies, that external properties can shift action-justification facts.
Take subject LOW who is justified in believing p, and for whom p justifies Xing. Externalists are committed to the possibility, in at least some cases, of another subject, HIGH, intrinsically identical to LOW, who is not justified in believing p. Fantl and McGrath seem to think that externalists are committed by (JJ) to think it possible, consistent with these stipulations, that HIGH is not justified in Xing, but (JJ) just doesn't get them anywhere near that commitment. Indeed, (JJ) is silent about HIGH. This principle tells you about what happens when a subject is justified in believing p; it entails nothing about what happens when a subject is not justified in believing p. For all (JJ) says, p may justify HIGH in Xing, too. (Consider this coherent principle that entails (JJ): If anyone intrinsically identical to you is justified in believing p, then p is warranted enough to justify you in phi-ing, for any phi.)
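The point is just the logic of the conditional; schematically (my notation, not Fantl and McGrath's):

```latex
% J = HIGH is justified in believing p;
% W(phi) = p is warranted enough to justify HIGH in phi-ing.
\[
\text{(JJ)}:\quad J \rightarrow \forall\varphi\, W(\varphi)
\]
% The externalist stipulates only that J is false for HIGH. A
% conditional with a false antecedent entails nothing about its
% consequent: (JJ) together with not-J is consistent both with
% W(X) and with not-W(X). Inferring not-W(X) from these premises
% would be the fallacy of denying the antecedent.
```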
Suppose we considered a stronger, biconditional, principle:
(JJ*) You are justified in believing that p if and only if p is warranted enough to justify you in phi-ing, for any phi.
I don't know whether (JJ*) is plausible or not; it's strictly stronger than the principle Fantl and McGrath defend. It gets around the problem I just raised for their charge against the externalist. But even (JJ*) isn't strong enough to deliver an entailment from externalism to a difference in what actions are justified between LOW and HIGH. Externalism and (JJ*) commit one to the verdict that p cannot justify HIGH in Xing, even though it can justify LOW in Xing. That's a far cry from the stated claim that nothing justifies HIGH in Xing. And I just don't see any plausible argument that this could be the case. It may be, for all (JJ*) says, that HIGH and LOW must be justified in performing all the same actions, but that they have divergent propositions justifying those same actions. (The plausible way to develop this line, I think, is that HIGH's reasons are a proper subset of LOW's.)
So I don't think that externalists who like (JJ), or even those who accept (JJ*), are committed to the allegedly counterintuitive claims about action that Fantl and McGrath charge.