Sunday, December 27, 2009

What is purism? What are epistemic standards?

I'm reading Fantl and McGrath's new knowledge book. An important thesis of the book is that of Impurism. Impurism is defined in chapter one as the denial of Purism, given thus:
(Purism about Knowledge) For any subjects S1 and S2, if S1 and S2 are just alike in their strength of epistemic position with respect to p, then S1 and S2 are just alike in whether they are in a position to know that p.

Impurism is also defined, a bit differently, in chapter two:
(Impurism) How strong your epistemic position must be -- which purely epistemic standards you must meet -- in order for a knowledge-attributing sentence, with a fixed content in a fixed context of use, to be true of you varies with your circumstances. (35)

Impurism, Fantl and McGrath think, is a counterintuitive claim; adopting it is, according to the central argument of the book, the heavy cost that is worth paying for fallibilism. I have a hard time seeing why people should find it so counterintuitive; I'm also far from convinced that they actually do. Purism and impurism are technical notions that require a fairly sophisticated background in epistemology to understand. Furthermore, they are given here in terms of the far from explicit notions of 'purely epistemic standards' and 'strength of epistemic position'. What factors influence the 'strength of one's epistemic position'? Intuitively, what one knows is of great relevance to the strength of one's epistemic position, but Fantl and McGrath cannot be using the term in a way that licenses this intuitive verdict; otherwise purism would be trivially true. They must have some notion other than the intuitive one in mind. What is it? And do we really have intuitions about it?

At points, Fantl and McGrath describe the factors that count towards strength of epistemic position as 'truth-relevant' factors. (I think that DeRose also used this gloss in his characterization of 'intellectualism', which I think is just meant to be the same thing as F&M's 'purism'.)  This is meant to rule in facts about the actual or probable truth of the indicated belief, and to rule out facts like what is salient to the subject or how much is at stake for her. That's some progress -- but is it clear enough? It is, I think, meant to be consistent with purism that whether a subject knows depends on whether she is proceeding responsibly in forming her belief. (It'd better be, because that's a traditional view, and purism is supposed to include that tradition.) Is this factor 'truth-relevant'? I guess it's supposed to be. We could rely on a principle like this: if a belief is responsibly formed, then it is likely to be true.

Similarly, it's meant to be consistent with purism that whether a subject knows depends on features of her environment -- even those that don't affect the truth value of her belief. For example, whether a subject knows that there is a barn in front of her depends in part on whether there are barn façades nearby. Presumably, this is roped in under truth-relevance by the effect of such circumstances on the reliability of (a certain specification of) the subject's belief-forming process, which is correlated with truth.

But if a connection that weak is sufficient to count responsibility and environmental features as truth-relevant, then it's hard to see why it shouldn't also count in knowledge, and thus, if the views developed by e.g. Fantl & McGrath, Stanley, etc. are right, make practical situations truth-relevant. Your stakes, like your environment, play a role in determining what you know, and knowledge, like responsibility and reliability, is strongly connected to truth. In what sense is such a stakes-sensitive view 'impurist'? In what sense are stakes disconnected from 'purely epistemic standards'?

I don't even understand what purism amounts to, if it's not the triviality that this line of reasoning suggests. And I certainly don't have intuitions about purism. Therefore, I have a hard time seeing what all the fuss is about. (Interesting sidenote: it looks like common ground among at least many SSI types and at least many contextualists that one of the motivations for contextualism is to maintain purism. I'm a contextualist who has never had anything like that motivation; indeed, it looks pretty incomprehensible to me.)


Tuesday, December 01, 2009

Experimental Philosophy and Apriority

I've just written up an abstract for a paper I'm thinking about writing on the bearing of x-phi on the alleged apriority of philosophy. Short answer: there is none—even if the x-phi critics are right about the need for philosophers to be doing more science. I've posted it on the Arché Methodology Blog; I'd welcome any comments on it over there.

Tuesday, November 24, 2009

Contextualism, Intellectualism, and Ignorant Third Persons

It's a little bit natural to think that 'knows' contextualism and the shifty kind of invariantism that's sometimes called 'SSI' or 'IRI' come to a bit of an intuitive draw when we consider two kinds of third-person knowledge attributions. High Howie has whatever features you think make it harder to know, or make 'know' express a stronger relation: he's thinking about skeptical possibilities, it's really important to him whether p, or whatever. Low Louie is just the opposite: to Louie, p's no big deal, he's not worried about it, whatever. High Howie says things like "I don't know that p," while Low Louie says things like "I know that p," and both utterances look pretty good, even though in some sense Howie and Louie look to be in identical epistemic situations -- they have the same evidence, or something like that.

(Now I happen to think that it's not at all clear how to make sense of that last stipulation. This basically amounts to a worry about whether there is any correct generalization distinguishing the shifty SSI-like views from 'classical invariantism'. But I'm setting that aside for now, assuming, as is usual in this discussion, that the sense in which Howie and Louie are in the same 'epistemic position' is tractable -- and does not at least really trivially entail that they're alike with respect to knowledge. I'll here use 'epistemic position' technically, to mean the stuff that traditional invariantists affirm, but shifty people deny, to comprise a supervenience base for knowledge.)

Monday, November 23, 2009

Asserting Kp v p

Keith DeRose accepts something like the knowledge norm of assertion -- although, as a contextualist, he can't have it entirely straightforwardly. He at least thinks this much: the assertability conditions for S for 'p' are the same as the truth conditions for 'I know p' in S's mouth. He takes it to be obvious that these are different from the assertability conditions for 'I know p'.

Now I'm one of those weirdos who is actually a bit sympathetic to the KK principle. So I'm interested in his argument. I find it pretty uncompelling. DeRose writes:
Both equations of standards -- (1) those for properly asserting that p with those for properly asserting that one knows that p, and (2) those for properly asserting that one knows that p with those for actually knowing that p -- are mistaken, as I trust the considerations below will show to anyone who has deliberated over close calls about whether one is positioned well enough to claim to know that p or should cool one's heels and only assert that p. (The Case for Contextualism, 103.)

I won't continue with his argument, because I'm already not on board. I'm pretty sure I've never deliberated over a close call about whether I was in a good enough situation to assert that I know p, or whether I should cool my heels and only assert that p. Indeed, that strikes me as a totally bizarre thing to do. DeRose himself says that to assert that p is, in some sense, to represent oneself as knowing p.

I know that there are strong theoretical reasons for denying KK, and for accepting the knowledge norm of assertion, and I see that those two verdicts together predict that there will be cases like these in which one can assert p but not that one knows p. But to take such cases as a clear starting point strikes me as bizarre; this, to me, is a cost that I'll accept if I'm forced to. But it's by no means obvious that this ever happens. "p but I don't know whether I know p" is not good.

Am I off base here? Do you ever consider whether Kp is warrantedly assertable, or whether to just stick with the safer p? I don't. (I know I don't.)

Thursday, November 19, 2009

What is infallibility supposed to be?

This week I'm thinking about Laurence Bonjour's In Defense of Pure Reason. In §4.4, Bonjour offers what he takes to be a very straightforward argument against the infallibility of rational insight: just look, he says, at all the alleged cases of rational insight that are false -- some have been empirically refuted, some have been shown a priori to be incoherent, and some are simply inconsistent with others in a way that guarantees that at least some are false.

He qualifies the charge of fallibility, recognizing that it's open to one to deny that such cases of seeming rational insight into something that ends up being false are genuine rational insights at all; this, he says, is a "mere terminological or conceptual stipulation" and "fails to secure infallibility in any epistemologically useful sense".

I don't see what infallibility was ever going to amount to, if it was to be something stronger than the sense Bonjour thinks is uninteresting. Could anybody ever really have thought that something's merely seeming to be a rational insight guaranteed its truth? Descartes, for example, recognized the possibility that human reason might be deceived by a God or demon, or might be imperfectly designed, in ways that would lead it into error.

What is the correct characterization of a strong form of infallibility?

Monday, November 16, 2009

Could there be Reductive Knowledge First?

Timothy Williamson has famously defended these two claims:

(1) Knowledge cannot be analyzed

(2) Knowledge can play lots of important explanatory roles all over the place

These two claims, if true, give us reason to think about the role of knowledge very differently: use it to explain things, instead of treating it as something we're trying to explain. Call this project -- the one recommended in the previous sentence -- 'knowledge first.' The question I'm wondering about right now is: what is the relationship between (1) and knowledge first? Does the case for knowledge first depend on the case for (1)?

(Cards on the table: I'm a guy who thinks that (1) and (2) are both true and that knowledge first is a good idea. So I'm engaging now in a fairly academic question about what depends on what.)

Surely (1) and (2) are consistent (modulo possible worries about whether it's possible to analyze anything). 'Prime number' can be analyzed if anything can, but this is no obstacle to our using the notion of a prime number to explain various phenomena in the world -- for example, in theorizing about encryption algorithms.

Suppose (2) is true. The case for (2) would presumably consist largely of examples -- cases in which we got good explanatory payoff by invoking knowledge. That's the sort of thing that makes up the latter two thirds of Knowledge and Its Limits. And suppose (1) were unestablished, or even known to be false. Wouldn't (2) all by itself make a strong case for knowledge first?

Sunday, October 25, 2009

Hawthorne against contextualist 'inappropriate'

John Hawthorne gives an argument that contextualists about knowledge face considerable pressure to be contextualists about terms that refer to things widely thought to be linked to knowledge, like 'is epistemically permitted to assert' or 'relies inappropriately upon in one's practical reasoning'. I'm inclined to agree. He argues, however, that it's not at all plausible to treat words like 'inappropriately' as context-sensitive in the relevant way. Now actually, I'm sort of inclined to agree with that, too, although I'm not at all sure he's right. What I am pretty sure of is that his argument for this conclusion is pretty bad.

Suppose I'm in an everyday context and have pretty good evidence for the true p, such that 'I know that p' is true in my context. Then I go ahead and assert that p and rely on p in my practical reasoning. Now you come along in a more skeptical context where 'Jonathan knows that p' is false. Now you want to say, 'Jonathan ought not to have asserted p' and 'relying on p was inappropriate.' Hawthorne says that a contextualist treatment of this latter is implausible:
Assertability conditions and propriety conditions for practical reasoning just don't seem to vary in that way. The practical reasoning considered above is inappropriate, regardless of what an ascriber is attending to, and parallel remarks apply to the propriety of flat-out assertions of lottery propositions (in the setting envisaged). (90)

I can't read this as anything but a pretty blatant use/mention confusion. The relevant kind of contextualist can explain and predict that “the practical reasoning above is inappropriate, regardless of what an ascriber is attending to” is true. It’s exactly the same reason why, pace Lewis when he’s being sloppy, it’s not a result of contextualism about 'knows' that whether S knows p depends in part on what an ascriber is attending to. That, again, is just the same reason why it’s not true that whether you are female depends on whom I’m talking to. Your gender has nothing to do with me; neither, according to contextualism about ‘knows’, does whether you know p. And neither, according to the hypothetical view under consideration under which ‘inappropriate’ is context-sensitive, does whether your action is inappropriate depend on anything about me.

Saturday, October 24, 2009

Counterfactuals and Knowledge

I'm on the record as thinking there are tight connections between counterfactuals and knowledge.

Robbie Williams, in his "Defending Conditional Excluded Middle," denies this. At least, he argues for a strong disconnect between them. Robbie argues, among other things, that there are strong reasons to accept both (A) and (B):
(A) If I were to flip a fair coin a billion times, it would not land heads a billion times.

(B) If I were to flip a fair coin a billion times, it would not be knowable that it would not land heads a billion times.

Since, Robbie says, (A) and (B) are both true, it can't be that (A) entails the negation of (B) -- therefore Bennett's view, which connects knowledge and counterfactuals in a way implying that entailment, is false. Robbie's argument for (A) is that rejecting it would require rejecting the truth of too many of our ordinary counterfactuals, since they enjoy no stronger metaphysical grounds than those for (A) -- since there's a genuine physical probability of really wacky things happening all the time, we have nothing better than this kind of probabilistic connection between antecedent and consequent in lots of counterfactuals that we want to maintain.
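(For a sense of the scale involved -- my own back-of-the-envelope arithmetic, not anything in Robbie's paper: the probability of a billion consecutive heads with a fair coin is 2^-1,000,000,000, far too small for ordinary floating point, but easy to gauge in logarithms.)

```python
import math

# Chance that a fair coin lands heads on every one of a billion flips: (1/2)^n.
# The raw value underflows a float, so work with its base-10 logarithm instead.
n = 1_000_000_000
log10_p = n * math.log10(0.5)
print(f"P(all heads) is about 10^{log10_p:.3g}")  # on the order of 10^(-3 x 10^8)
```

So the "genuine physical probability" of the wacky outcome, while nonzero, is unimaginably smaller than anything we ordinarily treat as a live possibility.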

The way Robbie puts the point is that denying (A) would be to commit oneself to an error theory, since it would make our ordinary judgments about ordinary counterfactuals wrong all the time. This move seems to me a bit odd; to my ear, (A) does not look obviously true. Indeed, it looks like we should reject it. That's not to say I can't be moved by an argument in favor of it -- I can -- but if we're in the game of respecting pre-theoretic intuitions, it seems to me that to accept (A) is to embrace something of an error theory, too. We can make it worse if we make the problematic possibility more salient:
(A*) If I were to flip a fair coin a billion times, the possibility of its landing heads a billion times would not be the one to become actual.

If you agree with me that (A*) is equivalent to (A), and that (A*) sounds false, then you must likewise agree with me that Robbie, in embracing (A), commits to a bit of error theory himself. That's not to say it's therefore a bad view; it's just to say that we're already in the game of weighing various intuitive costs. It's not as simple as: error theories are bad, therefore (A) must be true.

(Another observation: Robbie thinks it'd be bad to deny (A) because it would make us deny the truth of many ordinary counterfactuals, which play important roles in philosophy. He writes:
Error-theories in general should be avoided where possible, I think; but an error-theory concerning counterfactuals would be especially bad. For counterfactuals are one of the main tools of constructive philosophy: we use them in defining up dispositional properties, epistemic states, causation etc. An error-theory of counterfactuals is no isolated cost: it bleeds throughout philosophy.

Perhaps this is right. But if it is true that counterfactuals play really important roles in the construction of philosophical theories, then it's not just their truth that matters -- it's also our knowledge of them. So a view that preserves many of these counterfactuals as true, but that leaves us with very little knowledge of counterfactuals, seems to share much of what is problematic about the error theory Robbie discusses.)

Robbie gives three arguments for (B). I'll discuss the first two in this blog post; I think that they have analogues against (A).

Friday, October 23, 2009

Epistemic Modals and Contextualism

Here's an insanely simple argument for contextualism about knowledge. I think it's sound, although I'm not sure I'd expect many people to be persuaded by it. I'd be interested in hearing about how readers might think it best to resist it.

Here's premise one. Epistemic modals are intimately connected to knowledge in something like the following way: it might be that p iff the relevant base of knowledge doesn't entail that not-p. This is pretty much standard, I think, although of course people argue about just which knowledge base is the relevant one. This much looks like common ground, for instance, in the debate between contextualists and relativists about epistemic modals. What's at issue there is how the relevant knowledge base gets fixed -- whose knowledge counts. If you need an argument for this connection, just reflect on the absurdity of "it might be that p, but I know that not-p" and "I don't know that p, but it must be that p". (I'm assuming the duality of might and must.)

Here's premise two. In many situations, both of the following obtain: (a) were someone to say "I know that p", that utterance would be accommodated and accepted as true; (b) were someone to say "it might be that not-p", that utterance would be accommodated and accepted as true. For example, in my current situation, I could truly assert "I know that Derek will respond to Paul (because that's what the workshop schedule indicates)". Alternatively, I could truly assert "Derek might not respond to Paul (because it's possible that he'll get sick during the lunch break and have to go home)." (Of course, I can't go both ways; in no context can I say both things; the point is that in some contexts I could say either.)

These two premises, all by themselves, put the invariantist in hot water. Take one of the situations described in premise two, and suppose invariantism is true. In that situation, by premise two, "S knows that p" expresses a truth in some context; therefore, by invariantism, it expresses a truth in all contexts. But, by premise one, "S knows that p" is inconsistent with "it might be that not-p". So this modal claim must be false in all contexts—against the stipulation of premise two.
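The shape of the reductio can be laid out schematically (my own reconstruction, not drawn from any text):

```latex
\begin{align*}
\text{(P1)}\quad & \text{`Might $\neg p$' is true in context $c$ iff the
  $c$-relevant knowledge base does not entail $p$.}\\
\text{(P2)}\quad & \text{For one and the same situation, there are contexts
  $c_1$, $c_2$ such that `$S$ knows $p$'}\\
  & \text{is true in $c_1$ and `Might $\neg p$' is true in $c_2$.}\\
\text{(Inv)}\quad & \text{`$S$ knows $p$' has the same truth value in every
  context.}\\
\text{So:}\quad & \text{by (P2) and (Inv), `$S$ knows $p$' is true in $c_2$;
  by (P1), `Might $\neg p$' is then}\\
  & \text{false in $c_2$ -- contradicting (P2). Hence (Inv) is false.}
\end{align*}
```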

The obvious solution, from my point of view, is contextualism about 'knows'. Then we can maintain the connection between 'knows' and 'might' in all contexts, and have each sentence true in some context. I don't see any option nearly so appealing for the invariantist. But this argument is so simple that it can't be decisive. So what should the invariantist say?

Sunday, October 04, 2009

Knowledge norm of practical reasoning

In chapter 1 of Knowledge and Lotteries, John Hawthorne introduces the knowledge norm of practical reasoning: "At a rough first pass, one ought only to use that which one knows as a premise in one's deliberations." (p.30) He then immediately qualifies this principle in two ways with this footnote (fn.77):
Qualification 1: "In a situation where I have no clue what is going on, I may take certain things for granted in order to prevent paralysis, especially when I need to act quickly."

Qualification 2: "If I am in a situation where the difference between 'Probably p' and 'p' is irrelevant to the case at hand, I may use 'p' as a basis on which to act even though I only know that probably p."

I don't see why either of these is true in a sense that demands qualification of the rough first pass given above. With regard to qualification 1, let's suppose I'm in a situation where I have no clue what is going on. This isn't literally plausible, of course; in any situation in which my actions are rationally evaluable, I'll have some clue what's going on. So I suppose this must be understood as a kind of exaggeration. Perhaps, for instance, I suddenly find myself being charged at by a rhinoceros, and have no idea how I got there. It's clear, however, that if I just stand pat I will very shortly be gored to death. There's a button nearby with no label; I don't really have any clue what it is or what it does. But it may well be rational to press it, since that's the only real option I have and it's clear that if I do nothing, I will die. Maybe the button will open a trap door for the rhino, or for me -- who knows? It's worth a try.

The thing is, all the premises that I'm using, if I so reason, are things I know. I know that there's a rhino; I know that I'll die if I do nothing; I know that there's a button; I know that maybe if I press the button I'll survive. So I don't see what pressure cases in which I have to act under extreme uncertainty put on the rough principle stated.

Similarly, if I'm in a situation where the difference between 'probably p' and 'p' is irrelevant to the case at hand, why think that I'm using 'p' as a basis to act? We've just stipulated that 'probably p' will do just as well—why not say I'm acting on that known proposition?

I don't really see what Hawthorne is up to in this footnote. (I suspect these issues may be developed in the later paper with Jason Stanley -- I read that a couple of years ago, but need to have another look.)

Wednesday, September 30, 2009

Generality of Gettier Judgments

I'm teaching a contemporary epistemology course with Yuri to Honours students this year. We started with Linda Zagzebski's "The Inescapability of Gettier Problems", which, to my mind, helpfully turns attention away from attempts to analyze knowledge on which students may have spent much of their intro epistemology courses. I read it a few years ago, and found it totally convincing; I read it again this week, and found it totally convincing again, but noticed that the argument wasn't nearly so straightforward as I'd thought it was. In fact, I'm not sure what it is. (But I still find it compelling.)

Here's what Zagzebski says. She understands Gettier as having refuted the JTB theory thus: imagine a case in which JB but not T. Now change the case so that T, but just by luck -- not in a way connected to JB. Now you have a Gettier case -- an intuitive counterexample to K = JTB. That, she says, is what Gettier did. Then she says we can generalize the argument. Her target is any view that tries to analyze knowledge as T + X, where X doesn't entail T. Do just the same thing, she says, as Gettier: take a case in which X and not T (guaranteed possible), then tweak the case so as to make T true in a way unrelated to X.
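Put schematically (this is my gloss of the recipe, not Zagzebski's own notation):

```latex
% `K = T + X' abbreviates the target analysis of knowledge.
\begin{enumerate}
  \item Target any analysis $K = T + X$ in which $X$ does not entail $T$.
  \item Since $X \nRightarrow T$, there is a possible case in which
        $X \wedge \neg T$.
  \item Amend the case so that $T$ holds after all, but by luck --
        independently of $X$, so that $X$ is left undisturbed.
  \item The result is a case of $X \wedge T$ (hence of `knowledge' by the
        analysis) that is intuitively not knowledge.
\end{enumerate}
```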

(One might worry here as to whether this latter step is always possible. Juan Comesaña told me via Twitter that he wants to resist the argument here. I have a hard time seeing how it couldn't be done, for any X that's plausibly natural enough to figure in an analysis. We'd need X to be consistent with not-T, but for X & T together to entail that X and T are closely connected. That seems, at least, really weird. Maybe there's an argument lurking that this is impossible? Or maybe it's possible after all? I'm not sure. Thoughts? Anyway, this isn't the point I wanted to press.)

Ok, so, modulo the parenthetical, we've generated a case according to Zagzebski's recipe. Now, she tells us, we have a counterexample to the K = T + X theory. She offers:
...a general rule for the generation of Gettier cases. ... Make the element of justification (warrant) strong enough for knowledge, but make the belief false. ... Now emend the case by adding another element of luck, only this time an element which makes the belief true after all. The second element must be independent of the element of warrant so that the degree of warrant is unchanged. ... We now have a case in which the belief is justified (warranted) in a sense strong enough for knowledge, the belief is true, but it is not knowledge.

What's interesting about this passage is that she's making a general claim about the ultimate outcome of all instances of her argument schema. But the original Gettier argument, it is traditionally thought, depends on a particular sort of judgment about a particular case; we think about the story about Smith and Jones and Brown in Barcelona, and see that this is a case of JTB without K. If that's right, then it's totally mysterious how Zagzebski or anyone else could be confident that the same pattern will hold for other attempted analyses. But the argument isn't a non sequitur; it's (at least) prima facie compelling. Why?

At a workshop on thought experiments I attended in Brazil this summer, Anna-Sara Malmgren suggested that thought experiment judgments carry with them a kind of implicit generality that is best explained by their being products of nonconscious inferential reasoning. This, it seems, might be just the sort of case to support her suggestion. Our initial Gettier judgment constituted a kind of commitment to a general principle that rules out the kind of luck that Zagzebski is focusing on. If that's right, then the metaphilosophical emphasis on cases may be misplaced; more of our thought experiment judgments may be based on theory than is generally realized. Without a move like that, it's hard for me to see how Zagzebski's argument could make any sense.

Saturday, July 25, 2009

DeRose on 'knowledge' norm of assertion

In his 2002 paper "Assertion, Knowledge, and Context," Keith DeRose gave an argument for contextualism about 'knows' that took basically this form: knowledge is the norm of assertion; assertability varies according to context; therefore, knowledge varies according to context.

This was a pretty confused argument -- though of course this is much clearer in retrospect, with the advantage of years of engagement with SSI. The problem is that contextualism is a thesis about the word 'knows', not about knowledge, while 'knowledge is the norm of assertion' seems like it must be a thesis about knowledge, not about English. In fact, something like a knowledge norm for assertion, combined with the observation that what you're allowed to assert depends on your situation, provides a pretty good argument for SSI; I take it to be exactly parallel to the main argument for SSI that Stanley and Fantl and McGrath give.

In chapter 3 of his new book The Case for Contextualism, DeRose essentially reproduces the content of that 2002 paper, but he does add about two new pages of material designed to correct this aspect of the original. Now, in contrast to earlier, he recognizes the need to clarify the statement of the knowledge norm of assertion, if it is to be understood in contextualist terms. He gives us:
The Relativized Knowledge Account of Assertion (KAA-R): A speaker, S, is well-enough positioned with respect to p to be able to properly assert that p if and only if S knows that p according to the standards for knowledge that are in place as S makes her assertion. (99)

Wednesday, July 22, 2009

Philosophy Book Review Tips?

I'm reviewing a book for the first time; do any philosophers have tips on how to plan/organize/read/etc.? This is all new to me, and I'd welcome any advice from veterans on how to proceed. Do you like to take notes along the way? Should I plan on reading cover-to-cover more than once? How do you decide what to focus on?

Tuesday, July 21, 2009

Is Imagination A Priori?

Is Imagination A Priori? Draft of 21 July, 2009. Will be subsumed into a longer piece.
Sometimes, we come to new knowledge via imaginative processes; plausibly, sometimes, such imagination plays an indispensably warranting role. Is such a role for imagination inconsistent with the apriority of our new knowledge? Stephen Yablo has argued that a certain kind of imaginative engagement, ‘peeking’, is relevantly like reliance on perceptual experience, and thus precludes apriority. I argue that Yablo’s case against the apriority of peeking is not compelling.

Sunday, July 05, 2009

How rich is truth in fiction?

According to orthodoxy, what's true in a fiction goes beyond what's entailed by the text making up the story. Although fictions are gappy (there's no fact about whether Hamlet had an even number of hairs), some things are determinately true without being stated, or being entailed by things that are stated (Hamlet was not a leprechaun). This orthodoxy is pretty much universal, I think, and I've relied on it in my work on thought experiments.

In the past few months, I've worried a bit about that orthodoxy. I don't think orthodoxy here should be abandoned, but I do think it faces an important challenge that hasn't, to my knowledge, been articulated before. The challenge begins with a consideration of non-fiction.

Not all non-fiction is true; some works of non-fiction are mistaken, and some are fraudulent. (All biographies are non-fiction, but not all biographies are true.) What determines whether a non-fiction is true? The key to the challenge is this: we can and should distinguish between whether a work of non-fiction is true, and whether it is merely misleading. I could write a very deceptively misleading biography of David Lewis, such that anyone who read it would walk away with rampant false beliefs about him. But if I did so using only true sentences, relying on pragmatic implicatures and natural assumptions to generate the misleading nature of my non-fiction, then, I claim, the biography I have written is true.

Now take a fiction made up of just the same sentences I used in my misleading biography of Lewis. This is just the sort of situation where, according to orthodoxy, principles of generation for truth in fiction will generate false propositions and add them to the set of fictional truths. But this, given what we've said in the previous paragraph, is inconsistent with the truism that contents of fictions don't work in ways radically different from those of non-fictions. A non-fiction's content is true if its sentences are. Can we really deny that a fiction, sentence-by-sentence identical with a non-fiction, has true content if its corresponding non-fiction does? That's the puzzle.

Here, as I see them, are the options:

  1. Reject orthodoxy. What's true in the fiction does not, after all, go beyond what's given in the literal text.

  2. Posit a stark disanalogy. Their obvious forms of similarity notwithstanding, fictions and non-fictions get content in radically divergent ways.

  3. Bifurcate 'content'. (Brian Weatherson suggested this to me when I posed the puzzle to him.) Agree with the conclusion about 'content' of fictions in some sense, while insisting that there's a richer 'true in the fiction' that goes beyond content.


I guess I'm inclined to agree with Brian that, of these choices, (3) is the best way to go. But I'd be interested to hear if anyone thinks I'm selling the other possibilities short, or have overlooked additional possible solutions.

Saturday, July 04, 2009

Papers on Intuitions

Intuitions and Begging the Question is now under review. Check it out if you're interested in reading what I think about intuitions, and making me wish I'd asked you for comments on it before submitting it.

My next project: making revisions to Explaining Away Intuitions.

Here, incidentally, is where I have all my papers online now.

Tuesday, June 30, 2009

When do I know what I'm like when aroused?

I'm reading, and enjoying, Dan Ariely's book Predictably Irrational, which catalogues a number of systematic ways in which human economic decisions fall short of the sort of ideal that traditional economic theory assumes. Some, but nothing close to all, of the data was already familiar to me, and I've always been interested and impressed by the relevant experiments -- as, for example, cases in which A is preferred among {A, B}, but where B is favored -- including favored over A -- among {A, B, C}. Ariely has some interesting things to say about applications of this sort of data, both in obvious places (advertising) and in unobvious ones (courtship).

On the whole, I think I'd recommend the book. But I do think that Ariely badly misfires in his Chapter 5, "The Influence of Arousal: Why Hot Is Much Hotter Than We Realize." The main thesis of this chapter is that humans grossly underestimate the effects of future sexual arousal on future decision-making. For example, in their 'cool' state, humans tend to predict that they will behave, while aroused, in ways more responsible and moral than they in fact do. This thesis is eminently plausible, and Ariely is right about its implications for, for instance, ideal sex education. My problem with his discussion is that his experiments don't remotely establish his claim, but he pretends that they do.

As I said, his claim looks pretty plausible anyway, so criticizing his experiments and presentation is in some sense merely academic. That, obviously, isn't about to stop me.

Sunday, June 28, 2009

Thought Experiments Lecture Slides

I'm kicking off the Arché Summer School this year; here are the slides for my talk. (PowerPoint) (pdf)

(This is mostly designed for the attendees, although I guess it's conceivable that others could find them interesting. I don't have a handout; instead, I have a URL where interested parties can look at the slides, quotes, references, etc. in more detail.)

New Sidebar Features

I've added two kinda neat things to my sidebar: "Current Research Topic" and "Currently Reading". So if for whatever reason you're wondering what philosophical issues I happen to be thinking about on any given day, or what book I have in progress, my sidebar's a good place to look.

I'm trying to figure out a way to get the updates there to show up in my twitter feed, but that is proving less than trivial. I've given up for the moment, but maybe I'll try again later.

Tuesday, June 23, 2009

Real-World Deviant Gettier Case

Something cool happened in our methodology seminar last week. Some people like to remark on real-world Gettier cases they find themselves in. I found myself last week in the presence of a real-life deviant Gettier case.

A deviant Gettier case (what Ben Jarvis and I have also called a 'bad Gettier case') is a situation in which the literal text used to describe a Gettier situation is satisfied, but in such a way as to fail to provide a counterexample to JTB=K. Deviant Gettier cases play a central role in a disagreement Ben and I have with Timothy Williamson. What's cool about this deviant Gettier case is that (a) although I played a central role in producing it, I did so entirely without design, and (b) it's deviant with respect to one of the standard paradigms of Gettier cases.

Here's what happened.

Sunday, June 21, 2009

Allegedly inconsistent knowledge principles

Matt Weiner argues that 'our use of the word "know" is best captured by' an inconsistent set of inference rules. His setup strikes me as strange. He writes:
These are the Knowledge Principles:
(Disquotational Principle)  An utterance of “S knows that p” at time t is true iff at time t S knows-tenseless that p.
(Practical Environment Principle)  S’s evidence concerning p is good enough for knowledge iff S’s evidence for p is good enough to make it epistemically rational for her to act on the assumption that p.
(Parity of Evidence Principle) If the evidence concerning p for S and T is the same, then S’s evidence is good enough for knowledge iff T’s evidence is good enough for knowledge.

The Knowledge Principles are inconsistent, given only the truism that different people can have different practical stakes. Take a Bank Case (DeRose 1992), in which Hannah and Leila each have the same rather good evidence that the bank is open Saturday, but acting on a mistaken belief would harm Hannah much more than Leila.  Hannah is in a high-stakes context, Leila in a low-stakes context.  The Practical Environment Principle, which entails that Leila knows that the bank is open and Hannah does not, here generates an inconsistency with the Parity of Evidence Principle, which entails that Leila knows if and only if Hannah does.
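The inconsistency in the Bank Case can be made fully explicit. Here is a minimal sketch in Lean (the propositional letters and hypothesis names are mine, not Weiner's): take the Practical Environment Principle, applied to the case, to deliver that Leila knows and Hannah does not, and the Parity of Evidence Principle to deliver that Leila knows if and only if Hannah does.

```lean
-- KL : Leila knows the bank is open; KH : Hannah knows the bank is open.
-- pep : what the Practical Environment Principle yields in the Bank Case.
-- poe : what the Parity of Evidence Principle yields, given equal evidence.
example (KL KH : Prop)
    (pep : KL ∧ ¬KH)
    (poe : KL ↔ KH) : False :=
  -- From KL and the biconditional, infer KH; contradict ¬KH.
  pep.2 (poe.mp pep.1)
```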

Two things strike me as really strange about this claim, even setting aside the question of whether these principles are plausibly constitutive of the meaning of 'knows'.

Saturday, June 20, 2009

All reasoning is deductive

Brian recently wondered whether philosophy is deductive or somehow ampliative. I don't think I believe in ampliative inference. I think that all reasoning is deductive.

By 'deductive inference,' I mean inferences where the premises entail the conclusion, and one is led to accept the conclusion on the basis of the believed premises. (I'll limit this to inference in belief, although I think there's a broader important notion that is neutral on the attitude in question.) I'll use 'ampliative reasoning' to refer to reasoning that is not deductive; where one concludes something that goes 'above and beyond' what was given in the premises.

Suppose I see that Herman has an iPhone, and come to believe on this basis that Herman has an object. It is very natural in this instance to represent my reasoning deductively:
Herman has an iPhone.
Therefore, Herman has an object.

(I don't much mind if you want to include a tacit premise to the effect that iPhones are objects. Put it in or leave it out, as you like.)

Some reasoning, however, is commonly thought to be ampliative. Just which cases are like this is a matter of some controversy. One might think that ordinary perceptual judgments are like that:
It appears to me as if I have pocket kings.
Therefore, I have pocket kings.

Or maybe standard cases of induction are like that:
Torfinn got angry the last twenty times someone mentioned two-dimensionalism.
Therefore, Torfinn will get angry the next time someone mentions two-dimensionalism.

I think there's generally thought to be a strong intuitive sense in which it is correct to formalize these arguments as ampliative, rather than deductive. But I just don't see it. These ampliative bits of reasoning are easily recast as deductive ones. One way to do this is to add to each a tacit premise at least as strong as the material conditional from original premise to conclusion. Another way is to take the inferences as being run against the background assumption that such a bridging principle holds. (I'm not sure how different these two ways are.) Either way, I'm trying to make sense of the intuitive idea that, in inferring Q from P, one demonstrates one's commitment to the material conditional P > Q. One cannot conclude that Q on the basis of P while regarding it as an open question whether it might be the case that P and ~Q.
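The tacit-premise recasting can be displayed formally. Here is a minimal sketch in Lean (the names are mine, chosen for the induction example): once the bridging conditional is made an explicit premise, the inference is straightforwardly deductive.

```lean
-- P : Torfinn got angry the last twenty times someone mentioned
--     two-dimensionalism.
-- Q : Torfinn will get angry the next time someone mentions it.
-- bridge : the tacit premise, at least as strong as the material
--          conditional from premise to conclusion.
example (P Q : Prop) (premise : P) (bridge : P → Q) : Q :=
  bridge premise
```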

Insisting that all reasoning is deductive will, I think, get us out of some messy problems. (Without going into detail here, I'm thinking about closure iterations, easy knowledge, and bootstrapping.) There must be some reason it's not the obvious choice, but I don't see what it is. What reason do we have to avoid positing tacit premises like these?

Site update

I finally got around to changing some things around on this site; I'm no longer embarrassed by it, although it still needs work. Let me know if you see anything wrong, or if you have suggestions for content/design/whatever.

Rational Imagination and Modal Knowledge

Rational Imagination and Modal Knowledge, with Benjamin Jarvis. Forthcoming in Nous.
How do we know what's (metaphysically) possible and impossible?  Kripke-Putnam considerations suggest that possibility is not merely a matter of (coherent) conceivability/imaginability.  For example, we can coherently imagine that Hesperus and Phosphorus are distinct objects even though they are not possibly distinct.  Despite this apparent problem, we suggest, nevertheless, that imagination plays an important role in an adequate modal epistemology.  When we discover what is possible or what is impossible, we generally exploit important connections between what is possible and what we can coherently imagine.

Quantifiers, Knowledge, and Counterfactuals

Quantifiers, Knowledge, and Counterfactuals, forthcoming in Philosophy and Phenomenological Research.
Many of the motivations in favor of contextualism about knowledge apply also to a contextualist approach to counterfactuals. I motivate and articulate such an approach, in terms of the context-sensitive ‘all cases’, in the spirit of David Lewis’s contextualist view about knowledge. The resulting view explains intuitive data, resolves a puzzle parallel to the skeptical paradox, and renders safety and sensitivity, construed as counterfactuals, necessary conditions on knowledge.

Explaining Away Intuitions

Explaining Away Intuitions, (2010) in Studia Philosophica Estonica, special issue on intuitions. Refer to published version, available here.
What is it to ‘explain away’ an intuition? Philosophers often attempt to explain intuitions away, but it is often unclear what the success conditions for their project consist in. I attempt to articulate these conditions, using several philosophical case studies as guides. I will conclude that explaining away intuitions is a more difficult task than has sometimes been appreciated; I also suggest, however, that the importance of explaining away intuitions has often been exaggerated.

Quantifiers and Epistemic Contextualism

Quantifiers and Epistemic Contextualism, Version of 25 May, 2010. Forthcoming in Philosophical Studies.
I defend a neo-Lewisean form of contextualism about knowledge attributions. Understanding the context-sensitivity of knowledge attributions in terms of the context-sensitivity of universal generalizations provides an appealing approach to knowledge. Among the virtues of this approach are solutions to the skeptical paradox and the Gettier problem. I respond to influential objections to Lewis’s account.

Knowing the Intuition and Knowing the Counterfactual

Knowing the Intuition and Knowing the Counterfactual, (2009) Philosophical Studies, 145(3), September 2009: 435-443. Please refer to published version here. Part of a Philosophical Studies book symposium on Timothy Williamson's The Philosophy of Philosophy. See also Williamson's response here.
I criticize Timothy Williamson’s characterization of thought experiments on which the central judgments are judgments of contingent counterfactuals. The fragility of these counterfactuals makes them too easily false, and too difficult to know.

Dreaming

Dreaming, with Ernest Sosa. Forthcoming in The Oxford Companion to Consciousness.

Thought-Experiment Intuitions and Truth in Fiction

Thought-Experiment Intuitions and Truth in Fiction, with Benjamin Jarvis. (2009) Philosophical Studies 142 (2), January 2009: 221-246. Please refer to published version, available online here.
What sorts of things are the intuitions generated via thought experiment? Timothy Williamson has responded to naturalistic skeptics by arguing that thought-experiment intuitions are judgments of ordinary counterfactuals. On this view, the intuition is naturalistically innocuous, but it has a contingent content and could be known at best a posteriori. We suggest an alternative to Williamson’s account, according to which we apprehend thought-experiment intuitions through our grasp on truth in fiction. On our view, intuitions like the Gettier intuition are necessarily true and knowable a priori. Our view, like Williamson’s, avoids naturalistic skepticism.

Scepticism and the Imagination Model of Dreaming

Scepticism and the Imagination Model of Dreaming. (2008) The Philosophical Quarterly, 58 (232), July 2008: 519–527 doi:10.1111/j.1467-9213.2007.546.x  Penultimate draft; please refer to published version, available online here.
Ernest Sosa has argued that the solution to dream skepticism lies in an understanding of dreams as imaginative experiences – when we dream, on this suggestion, we do not believe the contents of our dreams, but rather imagine them.  Sosa rebuts skepticism thus: dreams don't cause false beliefs, so none of my beliefs can be false beliefs caused by dreaming.
I argue that, even assuming that Sosa is correct about the nature of dreaming, belief in wakefulness on these grounds is epistemically irresponsible. The proper upshot of the imagination model, I suggest, is to recharacterize the way we think about dream skepticism: the skeptical threat is not, after all, that we have false beliefs. So even though dreams don’t involve false beliefs, they still pose a skeptical threat, which I elaborate.

Intuitions and Begging the Question

Intuitions and Begging the Question. Under Review. Version of 4 July, 2009.
What are philosophical intuitions? There is a tension between two intuitive criteria. On the one hand, many of our ordinary beliefs do not seem intuitively to be intuitions; this suggests a relatively restrictionist approach to intuitions. (A few attempts to restrict: intuitions must be noninferential, or have modal force, or abstract contents.) On the other hand, it is counterintuitive to deny the title of 'intuition' to a great many of our beliefs—including some that are inferential, transparently contingent, and about concrete things. This suggests a liberal conception of intuitions. I defend the liberal view from the objection that it faces intuitive counterexamples; central to the defense is a treatment of the pragmatics of ‘intuition’ language: we cite intuitions, instead of directly expressing our beliefs via assertion, when we are attempting to avoid begging questions against certain sorts of philosophical interlocutors.

Sosa on Virtues, Perception, and Intuition

Sosa on Virtues, Perception, and Intuition. Version of 19 January, 2009.
I critically evaluate Ernest Sosa's (2007) contrast between intuitive justification and perceptual justification. I defend a competence-based approach to intuitive justification that is continuous with epistemic justification generally.

Who Needs Intuitions?

Who Needs Intuitions? Two Experimentalist Critiques Version of 9 January, 2011. To appear in Booth and Rowbottom, (eds.) Intuitions, Oxford University Press.
A number of philosophers have recently suggested that the role of intuitions in the epistemology of armchair philosophy has been exaggerated. This suggestion is rehearsed and endorsed. What bearing does the rejection of the centrality of intuition in armchair philosophy have on experimentalist critiques of the latter? I distinguish two very different kinds of experimentalist critique: one critique requires the centrality of intuition; the other does not.

Dreaming and Imagination

Dreaming and Imagination, (2009) Mind and Language, 24 (1), February 2009: 103-121. Please refer to published version, available online here.

I argue, on philosophical, psychological, and neurophysiological grounds, that contrary to an orthodox view, dreams do not typically involve misleading sensations and false beliefs. I am thus in partial agreement with Colin McGinn, who has argued that we do not have misleading sensory experience while dreaming, and partially in agreement with Ernest Sosa, who has argued that we do not form false beliefs while dreaming. Rather, on my view, dreams involve mental imagery and propositional imagination. I defend the imagination model of dreaming from some objections.

Friday, June 19, 2009

Experimentalist Pressure Against Traditional Methodology

Experimentalist Pressure Against Traditional Methodology, Version of 11 September. Under review.
According to some critics, traditional armchair philosophical methodology relies in an illicit way on intuitions. But the particular structure of the critique is not often carefully articulated—a significant omission, since some of the critics' arguments for skepticism about philosophy threaten to generalize to skepticism across the board. More recently, some experimentalist critics have attempted to articulate a critique that is especially tailored to affect traditional methods, without generalizing too widely. Such critiques are more reasonable, and more worthy of serious consideration, than are blunter critiques that generalize far too widely. I argue that a careful (empirical!) examination of extant philosophical practices shows that traditional philosophical methods can meet these more reasonable challenges.

Sunday, June 07, 2009

Country Cooking from Central France: Roast Boned Rolled Stuffed Shoulder of Lamb (Farce Double)

Country Cooking from Central France: Roast Boned Rolled Stuffed Shoulder of Lamb (Farce Double), read by Isaiah Sheffer on PRI's Selected Shorts.

A posteriori knowledge of a priori abilities

Stephen Yablo argues that knowledge of things like shapes, insofar as they depend on visual imagination, cannot be a priori. Here is one of his arguments:
[S]ome imagined reactions are a better guide to real reactions than others. Imagined shape reactions are a good guide, you say, and you are probably right. But it is hard to see how the knowledge that they are a good guide could be a priori. If the mind’s eye sees one sort of property roughly as real eyes do, while its take on another sort of property tends to be off the mark, that is an empirical fact known on the basis of empirical evidence. I know not to trust my imagined reactions to arrangements of furniture, because they have often been wrong; now that I see the wardrobe in the room, I realize it is far too big. It is only because they have generally been right that I am entitled to trust my imagined judgments of shape. (458)

I think, however, that there is a conflation here between knowledge and knowledge of knowledge. It is a controversial thesis that, in order to know via the deliverances of an evidential source, one must know that source to be reliable. But even if that controversial thesis is true, it doesn't entail that in order for one to know a priori via the deliverance of some source, one must know a priori that that source is reliable; that would be a much stronger claim, and not obviously a plausible one.

Wednesday, June 03, 2009

Blog up and running (I hope)

Sorry for all the technical difficulties in the past week. It now appears that my blog works. I'm still hoping to import my posts from my old blog, but, I'm sorry to say, I'm becoming decreasingly optimistic. We'll see. Expect things to shift around here quite a bit over the next few weeks; my coding skills are mediocre at best, so there's a bit of trial and error involved, but I sort of have an idea now how I'll set things up around here.