Tuesday, October 18, 2011
My new paper: Ignorance and Presuppositions
I have completed a draft of a new short discussion piece on Michael Blome-Tillmann's (2009) Mind paper, "Knowledge and Presuppositions". It is essentially a development of this blog post from a year and a half ago. (I'd forgotten about it, to be honest -- I rediscovered it as I finished drafting.)
I hope to submit it soon; any comments would be very welcome.
Saturday, July 23, 2011
Fitting the Evidence
I've never been at all sure what to make of 'evidentialism' in epistemology. What follows is a fairly naive response to Conee and Feldman; I suspect there's already some discussion of these or closely related issues in the literature, and I'd be happy to be pointed to it.
Conee and Feldman think that the doxastic attitude I'm justified in having toward any given proposition is the one that fits my evidence. However, it's just not at all clear what that's supposed to mean. They offer examples, by way of illustration:
Here are three examples that illustrate the application of this notion of justification. First, when a physiologically normal person under ordinary circumstances looks at a plush green lawn that is directly in front of him in broad daylight, believing that there is something green before him is the attitude toward this proposition that fits his evidence. That is why the belief is epistemically justified. Second, suspension of judgment is the fitting attitude for each of us toward the proposition that an even number of ducks exists, since our evidence makes it equally likely that the number is odd. Neither belief nor disbelief is epistemically justified when our evidence is equally balanced. And third, when it comes to the proposition that sugar is sour, our gustatory experience makes disbelief the fitting attitude. Such experiential evidence epistemically justifies disbelief.
My problem here isn't that anything strikes me as false -- it's just that I don't see that justification has been illuminated by the connection to 'fitting the evidence'. I don't feel like I have a better antecedent grip on what the evidence is, and how to tell what fits it, than I do on what is justified. Conee and Feldman go on to observe that various views about justification are inconsistent with evidentialism, because, e.g., they have the implication that only a responsibly formed belief is justified, but some beliefs that are not responsibly formed fit the evidence. One needn't think this, though; perhaps what fits the evidence is what one would do if responsible. Or, certain reliabilist views will have the implication that Bonjour's clairvoyant character has justified beliefs; this too can be rendered consistent with the letter of evidentialism by allowing that external facts about reliability play a role in what evidence one has (or, less plausibly, which attitude fits a given body of evidence). A commitment to evidentialism per se doesn't seem to tell you much.
A theory of justification, it seems, ought to be illuminating, in the sense that it should explain justification in terms of states and relations that are antecedently well-understood. (As indicated last post, however, I don't think this constraint implies that the stuff on the right-hand-side need always be non-epistemic.)
Friday, July 15, 2011
Naturalistic Reduction of Justification
I'm starting work on a new project on epistemic justification. I'm trying to begin by laying out various perceived or actual desiderata for theories of epistemic justification. Here's one, laid out in Alvin Goldman's classic paper, "What is Justified Belief?": a theory of justification should give necessary and sufficient conditions in non-epistemic terms. We could call this a "naturalistic reduction" constraint. Goldman writes:
The term 'justified', I presume, is an evaluative term, a term of appraisal. Any correct definition or synonym of it would also feature evaluative terms. I assume that such definitions or synonyms might be given, but I am not interested in them. I want a set of substantive conditions that specify when a belief is justified. Compare the moral term 'right'. This might be defined in other ethical terms or phrases, a task appropriate to metaethics. The task of normative ethics, by contrast, is to state substantive conditions for the rightness of actions. Normative ethics tries to specify non-ethical conditions that determine when an action is right. A familiar example is act-utilitarianism, which says an action is right if and only if it produces, or would produce, at least as much net happiness as any alternative open to the agent. These necessary and sufficient conditions clearly involve no ethical notions. Analogously, I want a theory of justified belief to specify in non-epistemic terms when a belief is justified. This is not the only kind of theory of justifiedness one might seek, but it is one important kind of theory and the kind sought here.
I am not sure I feel the motivation for this constraint. I can certainly see why we might not be satisfied by a theory of justification that is circular (justification is justification) or otherwise uninformative (justified belief is belief that is epistemically good), but barring all epistemic notions from the right-hand-side seems like a pretty strong constraint. But perhaps I've misunderstood Goldman's motivation here? Is the naturalistic reduction constraint motivated by something other than informativeness?
Thursday, July 14, 2011
The Rules of Thought
Benjamin Jarvis and I have been working for some time now on a book manuscript on mental content, rationality, and the epistemology of philosophy. I posted a TOC of our first draft last summer. Since then, we've received some helpful comments from reviewers, and have revised extensively; we now have a full new draft, which we feel ready to share with the public. If you're interested, you can download the large (2.3 MB, 331 page) pdf here. Comments and suggestions are extremely welcome.
I'm including a table of contents of the new draft in this post, to better give an idea of what we're up to.
Friday, May 13, 2011
Rationality, Morality, and Intuition
Suppose that Katie is sitting out in the sun. Here are two propositions:
(1) It is sunny.
(2) Jonathan is wearing glasses or Jonathan is not wearing glasses.
It's pretty plausible to develop the case in such a way that each of (1) and (2) would be rational for Katie to believe, and irrational for her to disbelieve. Why is it rational for Katie to believe (1), and irrational for her to disbelieve it? Because of various experiences she is having, like the way the sky looks, and the way her skin feels. (Obviously.) Why is it rational for her to believe (2), and irrational for her to disbelieve it? Now that's a more interesting question. (Under some circumstances, Katie might be rational in accepting (2) in part because of her perceptual experience -- for instance, if she can see that I am wearing glasses. We stipulate that she doesn't know, or have any reason to believe, that I am or am not.) One answer that seems to be reasonably widely held is that, in just the same way that the rationality of (1) is explained by Katie's perceptual experience, the rationality of (2) is explained by her intuitive experience. I think that this is a very bad answer, and in this post, I'll press an analogy that I hope will make you think this answer very bad too.
If the rationality of (2) depends on Katie's intuitions, then, if she lacked the relevant intuitions, she would no longer suffer rational pressure to accept (2). But that's crazy. Imagine Katie's stupid counterpart, Dummy, who does not have any intuitions about (2). It's rational for Dummy to accept (2), and irrational for her to reject it, just like it is for Katie. The difference between Katie and Dummy is that Katie's intuitions help her to see what she has reason to accept. Dummy is blind to her rational obligations. Dummy doesn't escape rational obligations just by lacking intuitions. We can take it a step further, and imagine yet another counterpart, Crazy, who has the intuition that (2) is false, or even necessarily false. Would it be rational for Crazy to deny (2)? Definitely not. The rational thing for Crazy to do would be to reject her crazy intuition and accept (2). So the fact that (2) is rational for Katie does not depend on her intuitions.
This point is very obvious in the moral domain.
Dick has promised his shy friend to speak on his behalf to the woman he loves, but breaks the promise, deciding instead to woo the woman in question for himself. Our confident judgment that Dick acts immorally does not depend in any way on our assessment of his moral sensibilities. Dick may be a moral imbecile, who lacks sensitivity, even at the intuitive level, to his moral requirements. His failure to intuit in accordance with his duties to his friend constitutes a moral shortcoming, and it does not by any means exempt him from said duties. Dick may even have had the intuition that betraying his friend was the correct action; still, that don’t make it right!
Nobody thinks that Dick escapes his moral obligations by failing to have the relevant intuitions, or even by having contrary ones. So nobody should think that of Katie, either.
Wednesday, May 11, 2011
Scorekeeping in a Football Game
According to David Lewis's classic paper, "Scorekeeping in a Language Game," conversations, like sporting matches, have scores, which characterize the current situation, and rules, which interact with scores to determine what is permissible. The score of a baseball game includes the number of runs scored, an indication of which team is batting, the number of outs, balls, strikes, etc. (Lewis characterizes baseball scores as ordered septuples; in fact, they're more complicated. Lewis's baseball scores leave out, e.g., the batting order, which pitchers have already appeared in the game, and perhaps most egregiously, who is on which bases.) An example of a baseball rule, in Lewis's sense, is that if the score includes three balls and the pitcher throws another ball, the score is updated by resetting the count, putting the batter on first base, updating other runners and the run total as necessary, and making the next member of the lineup the batter. This is a rule that tells you what happens to the score when a particular event occurs; there are also rules that tell you what is permissible, given the score. You may not come up to bat if you're in the lineup and it is not your turn.
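The walk rule just described is a simple state-update function on the score. Here's a minimal sketch of that idea (the representation and names are my own toy illustration, not Lewis's septuple formalism; runner advancement is elided):

```python
# Toy model of a Lewisian baseball "score" and one update rule:
# the walk rule, triggered when a ball is pitched on a three-ball count.

def pitch_is_ball(score):
    """Update the score when the pitcher throws a ball.

    If the count already held three balls, the batter walks: the count
    resets and the next member of the lineup comes up to bat.
    """
    if score["balls"] == 3:
        score["balls"] = 0
        score["strikes"] = 0
        score["batter"] = (score["batter"] + 1) % 9  # next in the lineup
    else:
        score["balls"] += 1
    return score

score = {"balls": 3, "strikes": 1, "batter": 4}
pitch_is_ball(score)
# The fourth ball walks batter 4; the count resets and batter 5 is up.
```

The point of the sketch is just that this kind of rule maps an event plus the current score to a new score; it says nothing about what is permissible.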
In a language game, scores will include contextual parameters like who is speaking, what is presupposed, what is salient, etc. There are rules that tell you what is permissible, given the score, and there are rules that govern the updating of scores. These sometimes interact, as when the score is accommodated to permit a conversational move. For example, there's a rule that says I'm only allowed to use the definite description "the cat" when there is a uniquely salient cat. But if there's no cat that was already salient -- if the score didn't already indicate a uniquely salient cat -- my utterance can cause an updating of the score, to make it permissible. If I say "I'd better go home because the cat is hungry," the score is updated to make my cat at home the uniquely salient one.
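Accommodation can be sketched in the same toy style as the walk rule: a permissibility check that, when it fails, adjusts the score so the move comes out permissible after all (again, the names and representation here are my own illustration, not Lewis's):

```python
# Toy sketch of a Lewis-style rule of accommodation for "the cat":
# the move requires a uniquely salient cat; if the score lacks one,
# the utterance itself updates the score to supply it.

def use_definite_description(score, referent):
    """Use 'the cat' to refer to `referent`, accommodating if needed."""
    if score.get("salient_cat") == referent:
        return score  # already permissible: referent is the salient cat
    # Accommodation: the score is adjusted so the move becomes permissible.
    score["salient_cat"] = referent
    return score

score = {"speaker": "JI", "salient_cat": None}
use_definite_description(score, "my cat at home")
# score["salient_cat"] is now "my cat at home".
```

The contrast with the baseball sketch above is that here the permissibility rule and the update rule interact: the attempted move itself forces the score change.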
This feature of conversational games, Lewis says, marks a difference between conversations and sporting events.
There is one big difference between baseball score and conversational score. Suppose the batter walks to first base after only three balls. His behavior would be correct play if there were four balls rather than three. That’s just too bad - his behavior does not at all make it the case that there were four balls and his behavior is correct. Baseball has no rule of accommodation to the effect that if a fourth ball is required to make correct the play that occurs, then that very fact suffices to change the score so that straightway there are four balls.
I'm not sure Lewis is right about this. Of course he's right that you don't get a walk just by trotting along to first base, but I'm not sure that's because there's no accommodation in play. What, plausibly, would happen in a Major League game where a batter tossed his bat aside and jogged to first base after ball three? The umpire would call him back. That's a baseball move too; that's what the umpire is supposed to do, and it's surely what he would do. And there's plausibly a baseball rule that says that when the umpire says you're still at bat and have three balls, the score is updated to make that the case. If the umpire stood idly by and let the batter take first base, I think that might well make it the case that he got a walk. That's part of why bad calls suck so much; they make themselves true. After this play, there were only two outs in the inning, even though, had the umpire performed correctly, there would have been three. (To deny this would be to say that there were four outs in that inning -- or that Melky Cabrera's subsequent apparent plate appearance was illusory, and that his turn was skipped in the lineup.)
This happened pretty dramatically in an infamous college football game between Colorado and Missouri. The football score, in Lewis's sense, will include what down it is. And failure to convert on fourth down means you lose possession. But in this game, the officials miscounted the downs, and nobody noticed until afterward, when Colorado scored a touchdown on 'fifth down', which had been described by the officials as fourth down. The officials got it wrong, obviously. But, I think, they didn't get it wrong in the sense of saying something false; they got it wrong by making the wrong thing the case. It really was fourth down, and there really was a touchdown.
So I think, contra Lewis, that football scores and baseball scores can accommodate, in more or less the same way that conversational scores can. (There's no doubt it's easier to do in the case of conversational scores.)
Monday, April 25, 2011
Brian Talbot on intuitions in philosophy
I spent the last week at the APA Pacific in San Diego. I came away with several topics, philosophical and nonphilosophical, that I'm hoping to write up quick blog posts about. In general, I think I'm going to start using this blog for a bit more extraphilosophical content. I'll start that not right now, though, because first I want to write up a reaction to Brian Talbot's talk, An Argument for Old-Fashioned Intuition Pumping (pdf link).
Brian was defending the traditional philosophical project of investigation into extra-mentalist subject matters, and arguing that the best way to do this involves heavy reliance on intuitions. His main focus was on the appropriate conditions for measuring such intuitions, but my main point of departure comes earlier, in the suggestion that traditional armchair philosophy must or should rely on intuitions in any interesting sense. Brian makes a stark contrast between intuitions and what he calls 'reasoned-to judgments'. Anything reasoned to is, Brian says, no intuition. I disagree, but let's allow the stipulation. The question is whether we have any special reason to care about intuitions in Brian's sense. Brian says we do; his argument is roughly this: a reasoned-to judgment that p is not itself evidence for p; rather, it reflects the evidence upon which the reasoning is based. So we should, when investigating the evidence for p, look to the evidence on which any reasoning is based; in the relevant cases, this must be intuition.
From this methodological stance, Brian makes some fairly sweeping claims about philosophical methodology and experimental philosophy, emphasizing the need to study intuitions directly, isolating them from any influence by reasoning. This, to my mind, is a rather bizarre idea. Good reasoning, in my view, is at the center of good philosophy. So I'm pretty suspicious of any approach to methodology that wants to marginalize reasoning.
In the Q&A, I raised something like this point. I pointed out that, at least so far as Brian had said, it was open for the defender of traditional philosophical methods to deny that intuitions play the important starting-point role that Brian articulated; perhaps reasoning is ultimately where the action is. Brian's response was effectively that reasoning must have starting points, and those starting points are intuitions. But reasoning, in general, need not have starting points; sometimes, good reasoning can proceed from the null set of premises. Another audience member raised the apt example of a reductio.
Brian's response to this was effectively to allow that there might be some philosophical knowledge achievable in this way, but that the strategy would extend only to tautologies. Insofar, then, as philosophers are interested in establishing more than just tautologies, one will need intuitions as starting points. Someone following my strategy, Brian said, will not count as engaging in the traditional project he intends of substantive investigation into extramentalist subject matters.
Now I don't know what exactly Brian means by 'tautology', but it seems to me that there are two ways one can go, either of which looks fine. If tautologies are limited to, e.g., obvious logical truths, then there is no reason to accept that good reasoning, without intuitions, can yield only tautologies. For good reasoning need not be limited to logical reasoning. I think that one can reason, for instance, from 'S knows that p' to 'p'; this kind of reasoning can underwrite the knowledge, from no premises, that knowledge is factive. And I don't see why this couldn't extend to all of that philosophy which is plausibly a priori. If, on the other hand, Talbot wants to call claims like these tautologies, then it'll just turn out that philosophers sometimes discover interesting tautologies.
Saturday, April 09, 2011
False Intuition and Justification
Suppose somebody has a false intuition about an a priori matter. Is she justified in believing its content? Many plausible answers, of course, will begin with "it depends...". On what does it depend?
Ernie Sosa thinks that among the things upon which it depends is whether the false intuition derives from "some avoidably defective way"; such errors constitute "faults, individual flaws, or defects." (I think Sosa means these two quoted bits to be approximately equivalent, or at any rate, to apply together in the relevant cases.) Sosa thinks this is what is going on when somebody follows her strong inclination to affirm the consequent, inferring from q and (if p, q) to p. By contrast, "the false intuitions involved in deep paradoxes are not so clearly faults, individual flaws, or defects. For example, it may be that they derive from our basic make-up, shared among humans generally, a make-up that serves us well in an environment such as ours on the surface of our planet."
So Sosa's line is that false intuitions do not justify when they derive from faults, flaws, and defects, but do justify when they derive from our basic make-up and are generally shared among humans. I'm suspicious that this distinction will hold up to scrutiny. I think there may be an equivocation on the relevant kinds of 'faults,' 'flaws,' and 'defects' going on. In one sense, of course, one is flawed by virtue of being incorrect; beliefs are supposed to be true, so if one goes wrong, that constitutes some sort of defect. This, of course, cannot be what Sosa has in mind. Instead, he seems to be imagining flaws as deviations from some sort of imperfect but generally effective strategy for getting around in the world. This is, perhaps, the more ordinary sense of a defect. My computer, even when it is working properly, will occasionally crash; a tendency to crash constitutes a defect only when it is not working properly. And maybe there is a good reason why humans ought to have tendencies to accept, for instance, naive set theory.
The problem for this line is that there is also plausibly sound reason for humans to have tendencies to commit more obvious errors, like affirming the consequent. Given the environments we face, having a tendency to affirm the consequent will help us to recognize patterns and confirm hypotheses; inductive reasoning generally looks a bit like affirming the consequent. Similarly with other standard errors; they derive from heuristics that are generally helpful.
So upholding Sosa's distinction faces a dilemma. Do these errors, false judgments arising from generally good heuristics, constitute defects or not? If not, then they are relevantly like the intuitive premises Sosa thinks are involved in deep paradoxes. If so, what makes them defects, and why does that consideration not apply equally to the cases of the paradoxes?
Consider three people. First, a possible über-rational being who regards me the way I now regard the fallacious gambler. She describes us both as defective, as failing to live up to the standards of rationality. She can see that I am not tempted by one particular error (the gambler's fallacy), but also that I regularly commit another (fallacy X) and feel some attraction to a third (naive set theory). Second, myself: I think of the fallacious gambler as defective, but I consider my own attraction to naive set theory, and that of my peers, nondefective; my more ignorant peers who have not studied philosophy I even consider justified in accepting it. (We will suppose I have Ernie's views.) Third, the gambler himself, who accepts his characteristic fallacy and naive set theory alike and sees no defect in any of us. He considers himself justified in both cases.
All parties agree that the gambler is wrong; he proceeds in a defective way inconsistent with intuitive justification. But Ernie thinks I'm importantly different from him: not defective, merely possessed of some tendencies to affirm falsehoods that derive from my general human nature. Our rational superior, presumably, thinks of me as defective in just the same way as the gambler, only to a lesser degree. Does Ernie give us any reason to think she is wrong about this?
Wednesday, March 30, 2011
Concepts and Survey Results
I'm thinking about a point that Ernie Sosa has made in response to survey-based experimental philosophy challenges. As we all know, some critics have argued that certain experimental results challenge traditional armchair philosophy. In particular, for example, Weinberg, Nichols, and Stich found that there seemed to be a systematic divergence of epistemic intuitions depending upon the ethnic background of the subjects studied: students of East Asian descent were more likely than students of European descent to, for instance, describe Gettier cases as cases of knowledge.
Here's a line that Ernie has pressed a few times now:
And the disagreement may now perhaps be explained in a way that casts no doubt on intuition as a source of epistemic justification or even knowledge. Why not explain the disagreement as merely verbal? Why not say that across the divide we find somewhat different concepts picked out by terminology that is either ambiguous or at least contextually divergent? On the EA side, the more valuable status that a belief might attain is one that necessarily involves communitarian factors of one or another sort, factors that are absent or minimized in the status picked out by Ws as necessary for “knowledge.” If there is such divergence in meaning as we cross the relevant divides, then once again we fail to have disagreement on the very same propositions. In saying that the subject does not know, the EAs are saying something about lack of some relevant communitarian status. In saying that the subject does know, the Ws are not denying that; they are simply focusing on a different status, one that they regard as desirable even if it does not meet the high communitarian requirements important to the EAs. So again we avoid any real disagreement on the very same propositions. The proposition affirmed by the EAs as intuitively true is not the very same as the proposition denied by the Ws as intuitively false.
(That's quoted from his contribution to the recent Stich and His Critics volume.)
As I'd understand it, the core suggestion is this: maybe there's no real disagreement here. Some subjects say that such-and-such 'is a case of knowledge,' while philosophers and other subjects say that it is not a case of knowledge, and there is no genuine disagreement, because the former subjects don't mean knowledge by 'knowledge'.
So here's my question. (One question, anyway. I have a few more.) What does any of this have to do with concepts? As I understand it, it's a question about meaning and reference: what does the word 'knowledge' refer to in a given subject's mouth? One can run a little detour through concepts if one wants: word meanings are concepts; the concepts are different; so the word is ambiguous. But what, if anything, does this 'conceptual ascent' contribute? I rather suspect that it does more to distract than to help. Steve Stich's response to Sosa emphasizes concepts in a way that looks to me largely irrelevant:
There is a vast literature on concepts in philosophy and in psychology (Margolis and Laurence 1999; Murphy 2002; Machery forthcoming), and the question of how to individuate concepts is one of the most hotly debated issues in that literature. While it is widely agreed that for two concept tokens to be of the same type they must have the same content, there is a wide diversity of views on what is required for this condition to be met. On some theories, the sort of covert ambiguity that Sosa is betting on can be expected to be fairly common, while on others covert ambiguity is much harder to generate. For Fodor, for example, the fact that an East Asian pays more attention to communitarian factors while a Westerner emphasizes individualistic factors in applying the term ‘knowledge’ would be no reason at all to think that the concepts linked to their use of the term ‘knowledge’ have different contents (Fodor 1998).
But Fodor's theory of concepts is not a theory of word meanings. What bearing does it have on whether there might be an Asian-American idiolect in which 'knowledge' means something other than knowledge? (I do mean this as a serious question; I'm less fluent in Fodor than I'd like.)
To my mind, the sort of view that Ernie needs to be worrying about is not Fodor's but Burge's. More on that in a future post, I think. For now, just this question: is anything usefully gained by thinking about Sosa's suggestion here in terms of concepts?