Monday, December 29, 2014
Some people have asked me why I am speaking out publicly about Brian Leiter’s threat of litigation. If the only thing I cared about was getting Brian Leiter to leave me and my wife alone, silence would probably be the prudent response to the letter we received. But I think the philosophical community is entitled to know when freedom of speech within it is under threat. That's why it's important to me that Leiter's threatening actions are brought to light.
The things we have said about Brian Leiter constitute protected speech. They were not misleading, and we stand by them. If Leiter carries out his threat to sue, we will vigorously defend. We will also have the right to counterclaim against Leiter for false and defamatory statements he has made about us during the last year, including those contained in the letter from his lawyer he published and his commentary on it.
Wednesday, December 24, 2014
A Recent Email from Brian Leiter's Canadian Lawyer
On December 15, Professor Brian Leiter notified me and my wife Carrie Jenkins, through a Toronto lawyer, that he "is prepared to seek redress in the courts of Canada" against us over various Internet postings which he alleges defame him.
Professor Leiter claims to be defamed by:
- Carrie’s pledge on her tumblr blog to behave with civility towards other philosophers and colleagues;
- Carrie’s post to Facebook of the complete text of Professor Leiter’s email of July 2, 2014 regarding that pledge;
- the so-called “September Statement”; and
- the post on the Feminist Philosophers blog entitled “Sometimes An Apology Doesn’t Help.”
His Toronto lawyer has demanded that Carrie and I publish on the Internet, for a continuous period of at least six months, a lengthy apology and retraction (which his lawyer drafted). If we do not, we are warned, Professor Leiter “will pursue his legal remedies against [me and my wife]” and “perhaps others among the original signatories to the ‘September Statement’.” We are also warned that Professor Leiter’s Canadian lawsuit against me and my wife will involve “a full airing of the issues and the cause or causes of [Carrie’s] medical condition;” a reference, it would seem, to posted information about the impact of Professor Leiter’s actions on Carrie’s health, her capacity to work, and her ability to contribute to the public discourse as a member of the profession.
Carrie and I have instructed our lawyer to inform Brian Leiter that anything we have posted about him on the Internet is lawful free speech under Canadian law and under the First Amendment to the US Constitution.
Monday, October 20, 2014
High School Popularity: A Modest Proposal
One of the difficult things about high school is figuring out which people to be friends with. After all, it can make a big difference in your life! Being friends with popular kids is a good way to become more popular yourself. Plus, your teachers might treat you better, you'll get more cool stuff (from being friends with the popular kids, who are also often the richer ones), etc. In the status quo, however, freshmen often enter high school without a very clear idea of who the popular kids are, so they're aiming their friendship aspirations pretty haphazardly.
But here's an idea for an enterprising popular kid to provide an invaluable service to everybody. He gets a group of his friends together, and they rate everybody in the school (or at least everyone they think is at least minimally popular) for popularity. Then he can make the results known to the whole school, free of charge! Now everyone worth thinking about trying to be friends with comes along with a numerical popularity rating. Sure, everybody's going to try to be friends with that one girl who was already a 4.8, and she doesn't have the time or energy to be friends with everyone, but since she's the most popular, it makes sense for her to be able to be the most selective about choosing her friends. And aren't the best candidates for friends the ones who deserve to have access to the most popular kid's friendship?
Now I'm the first to admit this system won't be perfect. The popular kids are likely to get ever more popular, since everyone will know that they're the people to try to be friends with. And of course everyone will have some incentive to be friends with that one kid who started the rating system, and with the raters that make up his circle of friends. (The latter can be mitigated somewhat if that first kid occasionally makes changes to the roster of kids who do the ratings.) So yeah, maybe there are better possible systems. But in the status quo, people are just trying to guess who's most popular by asking a couple of people or -- even more unfairly -- by judging by superficial cues like race and attractiveness and athletic ability. How is that fair.
Friday, October 10, 2014
Introspective and Reflective Distinguishability
Mooreans, including neo-Mooreans, think that we know lots of ordinary stuff, and that we also—maybe on this basis—know the denials of extraordinary skeptical scenarios. Duncan Pritchard defends a particular disjunctivist brand of neo-Mooreanism, according to which, in cases of successful perception, one has reflective access to factive reasons of the form I see that p, and perceptual knowledge based on such reasons. So for instance, when one looks at a red wall under ordinary circumstances:
- One sees that the wall is red.
- One has reflective access to the fact that one sees that the wall is red.
- One knows that the wall is red on the basis of the fact that one sees that the wall is red.
Since Duncan also accepts a closure principle on knowledge, he accepts:
- One knows that the wall isn't a white wall illuminated by red light.
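Spelled out, the closure step here is an instance of the familiar schema (this is my gloss of 'a closure principle', not Duncan's own formulation):
\[
[K p \wedge K(p \rightarrow q)] \rightarrow K q, \quad \text{where } p = \text{the wall is red and } q = \text{the wall is not a white wall illuminated by red light.}
\]
Since one knows that a red wall is not a white wall under red light, knowledge of the denial of the deceptive scenario follows from the perceptual knowledge above.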
Like all forms of Mooreanism, Duncan's view is in tension with certain skeptical intuitions. For example, it is in tension with this intuition:
(S) One can't tell by introspection that one is faced with a red wall rather than a white wall with red light.
As Duncan puts it,
If, in the non-deceived case, one has reflective access to the relevant factive reason as epistemological disjunctivism maintains, then why doesn't it follow that one can introspectively distinguish between the non-deceived and deceived cases after all, contrary to intuition? ... In short, the problem is that it is difficult to see how epistemological disjunctivism can square its claim that the reflectively accessible reasons in support of one's perceptual knowledge can nonetheless be factive with the undeniable truth that there can be pairs of cases like that just described [ordinary perceptual cases and corresponding deceptions] which are introspectively indistinguishable. (21)
(Duncan defines 'introspective indistinguishability' as the inability to know by introspection alone that the cases are distinct (p. 53).)
If I wanted to be a neo-Moorean of broadly Duncan's style (something I might well want to do), I'd just deny S, along with the many other skeptical intuitions that come out false on this view. But Duncan doesn't want to go that way; as this passage indicates, he considers S and claims like it to be 'undeniable truths'. (On p. 92 he even says that disjunctivists in particular are "unavoidably committed to denying that agents can introspectively distinguish" between the relevant cases.) I confess I don't see why it's so important to hold on to this particular skeptical intuition while happily rejecting others, such as the intuition that an ordinary person at the zoo doesn't know that she isn't looking at a cleverly disguised mule.
How does Duncan go about resolving the tension between his disjunctivism and S? By leaning on the 'by introspection' qualifier. He does think that, if one is in the good case, one can reason thus, resulting in knowledge of the conclusion: "I have factive reason R. Only in the good case would I have factive reason R. Therefore, I'm in the good case." But, he says, this is consistent with intuitions like S, which are about introspective abilities. And while one may be able to tell by introspection what reasons one has, one cannot tell by introspection that factive reasons obtain only in the good cases. This is something one can come to know by a priori reflection, but not by introspection. (And maybe the same goes for the epistemic standing of the inference from the two premises to the conclusion.)
This is ultimately a much milder concession to skeptical intuitions than it at first appeared. Although he preserves the letter of his interpretation of the claim that we can't introspectively distinguish the good cases from the bad cases, he does so by pointing out that "introspectively" is a stronger qualifier than one might have realised. He does think (p. 95) that one can reflectively distinguish between good and bad cases, where reflective distinguishability is the ability to know the cases to be distinct based on a combination of introspection and a priori reasoning.
So two thoughts. First the smaller one: is it really right to exclude a priori reasoning from the considerations that establish 'introspective distinguishability'? It's very hard for me to even make sense of just what that constraint is. (In The Rules of Thought, Ben and I argue that we can't divorce any kind of thought from a priori reasoning.) Consider these two cases: (1) I am presented in ordinary circumstances with a blue ball. (2) I am presented in ordinary circumstances with a black ball. Given the way my perceptual faculties work, we should consider these cases to be distinguishable in the relevant sense if any are. But is it clear that I can know them to be distinct without using a priori reasoning? It's not like the proposition that they're distinct is made available to me directly via introspection. Instead, I have introspective access to how one case looks, and to how another case looks, and I observe that they're different. From this I infer, using something like Leibniz's law, that they're distinct.
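To make that inference explicit (this rendering is mine, not anything in Duncan's text), write L(c) for how things look to me in case c. Introspection delivers how each case looks; observation, or something a priori, delivers that these looks differ; and the step to distinctness goes through the contrapositive of Leibniz's law:
\[
L(c_1) \neq L(c_2), \qquad \forall x \forall y\,\big(x = y \rightarrow L(x) = L(y)\big) \;\;\vdash\;\; c_1 \neq c_2.
\]
If even this humble transition counts as a priori reasoning, it is hard to see how any two cases could be 'introspectively distinguishable' in the restricted sense.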
Second, supposing Duncan is right about introspective distinguishability: maybe this just shows that the worry wasn't properly articulated in the first place. I submit that someone motivated by the kinds of skeptical pressures that would drive someone to say that you can't tell good cases and bad cases apart by introspection, isn't going to feel better if you allow a priori reasoning along with introspection. The key skeptical intuition in the first place was just that it shouldn't be that easy to tell the good cases and the bad cases apart. And there's no getting around it: that's just an intuition that disjunctivists need to deny. Once we come to appreciate this fact, I'm not sure how important it is to conform to the letter of certain idiosyncratic statements of the intuition.
Wednesday, September 24, 2014
Some thoughts about the PGR and Brian Leiter
In academic year 2002/03, I was finishing my undergraduate degree at Rice University, and I decided I was interested in applying to grad school in philosophy. Like many undergraduate philosophy majors, I knew next to nothing about the discipline of philosophy—I just knew that I'd enjoyed my philosophy courses, and done well in them, and I wanted more. The ideal circumstance, of course, would have been if someone with intimate knowledge of a wide variety of philosophy departments sat down with me for many hours and helped me to select a number of possible good fits. That was impossible, in my case and in most cases, for many reasons. I was exactly the kind of person the Philosophical Gourmet Report was meant to help. One of my professors pointed me to it, and I used it as a starting point for my research into grad school. It was an extremely useful resource, and I would have been worse off without it. So I agree with the people who have recently written to Brian Leiter, thanking him for creating what is a useful professional service.
Since then, as I have gotten to know the profession more intimately, I have become aware of many concerns about the PGR. Some of them, I think, like the weirdly strategic way in which some departments make hires in an attempt to raise themselves in the rankings, are an accidental result of the PGR's considerable success and influence. I also recognise that there are appropriate concerns about the PGR's methodology, and that it has a tendency to amplify problematic biases about who is and isn't a good philosopher, and what is and isn't a 'core' area of philosophy. I understand why some philosophers think that the PGR does more harm than good. But I do think that it fills what continues to be a genuine need in the profession. I don't really have better advice for a student trying to take the first steps to think about where to apply to grad school than to look at the PGR. Unless and until there is a better source of information available, the PGR remains useful and important.
But the other thing that I have come to realise, as I have gotten to understand the workings of professional philosophy better, is that Brian Leiter has a tremendous influence in the profession, in significant part because of his role as founder and editor of the PGR. And while he often channels his influence in what I consider to be positive directions, he also has engaged in a harmful pattern of bullying and silencing of those who disagree with him. If he were 'just any' philosopher saying mean things about people, this would be rude (and, in my view, unacceptable) but only marginally harmful. But in a culture in which philosophers are afraid to voice dissent against such a powerful individual, the harm is magnified tremendously. I do not think that Leiter himself understands the stifling and silencing effect that his words have on the less powerful people in the profession. In the most recent high-profile instance I have in mind, as most readers will already know, the target was my wife, Carrie Jenkins. Carrie wrote a widely celebrated statement, in wholly general terms, about the importance of philosophers treating each other respectfully. Brian Leiter—who had not previously been in correspondence with Carrie—interpreted this as a criticism of him personally, and wrote Carrie an insulting email, which had significant stifling and intimidating effects. In my opinion, this is not only unacceptable behaviour, but an abuse of the powerful position that Leiter finds himself in. And although the situation with Carrie is the one I am the most familiar with, it seems clear from discussions with others that this kind of bullying, silencing behaviour represents a pattern. That is why I have signed on to this statement (update: here), publicly declaring that I will not assist in the production of the PGR while it is under Brian Leiter's control. I am an untenured junior member of the profession, and have never been asked to contribute to the PGR, but I consider public statements like this important, especially in this context where fear of becoming the object of a negative Leiter campaign is so prevalent. It is important that other philosophers see that if they take a stand, they will not be alone. I am happy to see that many much more prominent philosophers than I—including at least one person who was on the PGR advisory board last week—have also signed.
I remain ambivalent about the PGR itself. As indicated above, I think it plays an important role. Perhaps something else could play that role in a better way, but unless and until such a thing exists, I think that the PGR itself does good. But in the status quo, where it makes everyone afraid of Brian Leiter, there is serious harm that comes along with that good. It is time for that harm to stop. The best solution for now would be for the PGR to proceed without its founder.
Saturday, August 30, 2014
Pritchard on pragmatics of knowledge ascriptions
I'm working on a review of Duncan Pritchard's book Epistemological Disjunctivism. I'll probably try out a few ideas here over the next couple of months. I want to start out by focusing on something from near the end of the book—§8 of Part III. Here, Duncan is trying to deal with what he considers to be a challenge to the particular form of neo-Moorean disjunctivist response to the skeptical paradox he's been developing. The salient element of the view is that, contrary to skeptical intuitions, one does typically know that e.g. one is related in the normal way to the world, rather than being a brain in a vat. This, even though one lacks the ability to discriminate perceptually between being related in the normal way to the world and being a brain in a vat.
The challenge Duncan considers in this section is that Moorean assertions like "I know I'm not a brain in a vat" seem conversationally inappropriate. As he puts it earlier in the book,
[T]here appears to be something conversationally very odd about asserting that one knows the denial of a specific radical sceptical hypothesis. That is, even if one is willing to grant with the neo-Moorean that one can indeed know that one is not, say, a BIV, it still needs to be explained why any explicit claim to know that one is not a BIV (i.e., 'I know that I am not a BIV') sounds so conversationally inappropriate. Call this the conversational impropriety objection. (115)
The answer Duncan gives to this challenge in §8 ("Knowing and Saying That One Knows") is that the Moorean claims in question, in the contexts under consideration, generate false conversational implicatures to the effect that one has the relevant discriminatory abilities:
[I]n entering an explicit knowledge claim in response to a challenge involving a specific error-possibility one is not only representing oneself as having stronger reflective accessible grounds in support of that assertion than would (normally) be required in order to simply assert the target proposition, but also usually representing oneself as being in possession of reflectively accessible grounds which speak specifically to the error-possibility raised. (142)
I tend to be suspicious of pragmatic explanations for infelicity that don't come along with systematic explanations. Grice tells nice stories about how his maxims predict particular implicatures, given various contents asserted. What is Duncan's explanation for why first-person knowledge assertions implicate that one has the perceptual capacity to discriminate the state of affairs claimed to be known from alternatives that have been mentioned? Let's take an example, adapted from one of Duncan's (p. 146 -- one of his "unmotivated specific challenge" cases):
- Zula: [looking at some zebras in the zoo] There are some zebras over there.
- Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
- Zula: I know that they're zebras.
Duncan's view is that Zula's last utterance is true but unassertable—unassertable because it implicates falsely that Zula can discriminate perceptually between zebras and cleverly disguised mules. But why does it implicate that, if it doesn't entail it? I can't see how any of Grice's maxims would generate the implicature in this case. Without some kind of story about where the implicature comes from, the suggestion that any impropriety comes down to pragmatics looks suspiciously ad hoc.
Notice also that certain predictions of the pragmatic explanation do not seem to be borne out. Since Duncan's story depends essentially on the implicatures involved in Zula's assertion, it does not extend to knowledge attributions that Zula doesn't assert. For example, it does not extend to Zula's unasserted thought in this case:
- Zula: [looking at some zebras in the zoo] There are some zebras over there.
- Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
- Zula: [thinking to herself] What an asshole. I know that they're zebras.
Zula's thought won't mislead Asshole or anybody else, so Duncan's story can't show why it's inappropriate. But it seems intuitively problematic in the same way her original assertion is. Similarly, there seems to be impropriety about Moorean assertions in third-personal contexts where one won't mislead. Suppose that you and I know full well that Zula can't tell the difference between a real zebra and a fake zebra; we also know full well that she is looking at a real zebra right now. Consider this:
- Zula: [looking at some zebras in the zoo] There are some zebras over there.
- Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
- Me: [to you, out of earshot of Z and A] Zula knows that they're zebras.
My assertion seems problematic in the same way Zula's original one does; but I do not mislead anyone. (We could also consider, for this point, a version of the first-personal case where it is stipulated to be common knowledge that Zula lacks the discriminatory ability in question.)
Here is one more observation about the case. Suppose nobody says anything about knowledge, as in this variant:
- Zula: [looking at some zebras in the zoo] There are some zebras over there.
- Asshole: They look like zebras. But maybe they're cleverly disguised mules.
- Zula: They are zebras.
Insofar as I can feel the force of Duncan's suggestion that Zula's original final utterance—'I know that they're zebras'—implicates that she has special abilities to rule out fakes, I think the same applies here. But if so, I think that this may show that even if Duncan has identified something wrong with the knowledge assertion, he hasn't identified everything wrong with it. For we have no inclination whatsoever to think that Zula speaks falsely in asserting, even in the face of the skeptical challenge, that there are zebras. The case is very different for her self-ascription of knowledge. The intuition is not merely that she shouldn't say she has knowledge; it's that she doesn't. (Indeed, I think the intuition is that it'd be fine for her to assert that she doesn't have knowledge.) Since there seems to be a special phenomenon about knowledge ascriptions, the pragmatic story will only work if it is particular to knowledge ascriptions. But I don't think it is; once the challenge has been made, an outright assertion of the proposition that was challenged does—so far as I can tell, in exactly the same way a bare knowledge ascription does—in some sense convey that one has the ability to answer the challenge.
More thoughts on more central elements of Duncan's very interesting book to follow. I started here for the simple reason that it was freshest in my mind when I finished the book today.
Tuesday, April 29, 2014
More on the well of knowledge norms
Dustin Locke has published a response to my Thought article, "Knowledge Norms and Acting Well". My paper (draft here) argued that lots of counterexample-based arguments against knowledge norms of practical reasoning take a problematic form: generating a case where it seems like S knows that p, but where it seems like S is not in a strong enough epistemic position to phi. These verdicts together tell us nothing interesting unless we assume some story about the relationship between p and phi; but defenders of knowledge norms needn't and shouldn't accept many such relationships.
For example, in Jessica Brown's widely-discussed surgeon case, it is thought to be intuitive that before double-checking the charts, (a) the surgeon knows that the left kidney is the one to remove; but (b) the surgeon ought not to operate before double-checking the charts. This is only a problem for the idea that one's reasons are all and only what one knows if the proposition the left kidney is the one to remove would, if held as a reason, be a sufficient reason to justify operating before double-checking the charts. But why should one think that?
Dustin resists my argument at several points. I'm not sure what to say in response to many of them; I think they're helpfully clarifying the sources of disagreement, but they don't make me feel any worse about my point of view. For example, Dustin seems to be happy to rest on certain kinds of very theoretical intuitions, like the intuition that the surgeon isn't justified in using the proposition that the left kidney is the bad one as a reason that counts in favour of removing the left kidney immediately. I don't have this intuition, and I wouldn't want to trust it if I did. I feel pretty good about intuitions about what actions are ok in what circumstances, but deeply theoretical claims like these don't seem to me to be acceptable dialectical starting places.
In what I found to be the most interesting part of his paper, Dustin also constructs a version of Brown's surgeon case where, if one assumes that (a) a Bayesian picture of practical rationality is correct and (b) practical reasons talk translates into the Bayesian talk by letting one conditionalize on one's reasons, we can derive the intuition mentioned above. I think that both of these assumptions are very debatable, but I also think that the case Dustin tries to stipulate is more problematic than he assumes. He offers the following stipulations:
- The surgeon cares about, and only about, whether the patient lives.
- The surgeon has credence 1 that exactly one of the patient's kidneys is diseased, and a .99 degree of credence that it is the left kidney.
- If the surgeon performs the surgery without first checking the chart, she will begin it immediately; if she first checks the patient's chart, she will begin the surgery in one minute.
- The surgeon has credence 1 that were she to check the chart, she would then remove the correct kidney.
- If the patient has the correct kidney removed during the operation, then there are the following probabilities that he will live, depending on how soon the surgery begins: (5a) If the surgery begins immediately and the correct kidney is removed, there is a probability of 1 that the patient will live; (5b) If the surgery begins in one minute and the correct kidney is removed, there is a probability of .999 that the patient will live.
- If the patient has the wrong kidney removed during the operation, then the probability that he will live is 0.
(This list is quoted directly.) I have two worries. First, Dustin also says of the case that "it's quite plausible that the surgeon knows that the left kidney is diseased", and assumes that she does. But this requires a very substantive epistemological and psychological assumption about the relationship between credence and knowledge. It is not at all innocent to assume that knowledge is consistent with non-maximal credence like this. For lottery-related reasons, Dustin is probably committing himself to the denial of multi-premise closure here. (Indeed, for reasons like the ones Maria Lasonen-Aarnio has emphasized, he may very well commit himself to denying single-premise closure.) That's not a completely crazy thing to end up being committed to, but I think it substantially mitigates the rhetorical force of an argument against me here. Similarly, there are probably good reasons to deny that the surgeon outright believes that the left kidney is diseased under these circumstances, either for conceptual/metaphysical reasons (see e.g. Brian Weatherson's "Can we do without pragmatic encroachment" or Roger Clarke's "Belief is credence one (in context)") or for psychological reasons (e.g. Jennifer Nagel's "Epistemic anxiety and adaptive invariantism"). If any of these views are right, then Dustin is committed to knowledge without outright belief.
My second worry concerns stipulation number 1: this is a surgeon who cares only about the life of the patient. From a realistic point of view, this is a very strange surgeon. According to Dustin's stipulations, the surgeon cares nothing at all about any of the following: whether she follows hospital procedure; whether she sets a good example for the students observing; whether she acts only on propositions that she knows; whether she is proceeding rationally. These strong assumptions are not idle; if we allow that she cares about any of these things, the utility calculus will not require her to go without checking, even when she conditionalizes on the content of her knowledge that the left kidney is diseased. (Suppose she cares about whether she acts only on that which she knows, and that she doesn't know whether she knows; then there is a substantial risk of the negative outcome of acting on something she doesn't know.) But these very strange assumptions will make our intuitions harder to trust. When we try to imagine ourselves in her position, we naturally assume she cares about the ordinary things people might care about. Stipulating that she only cares about one thing—not even mentioning the many other things we have to remember to disregard—makes it very hard to get into her mindset. So I'm inclined to mistrust intuitions about so heavily-stipulated a case.
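For concreteness, here is a minimal sketch of the expected-utility calculus I take the stipulations to support; the code is my own reconstruction (in Python), not anything from Dustin's paper, with utility 1 for the patient living and 0 for the patient dying.

```python
# A rough reconstruction of the utility calculus suggested by Dustin's stipulations
# (my own sketch, not code from his paper). Utility: 1 if the patient lives, 0 otherwise.

def eu_operate_now(credence_left_diseased):
    # Removing the left kidney immediately: the patient survives (probability 1)
    # only if the left kidney really is the diseased one; otherwise probability 0.
    return credence_left_diseased * 1.0 + (1 - credence_left_diseased) * 0.0

def eu_check_chart_first():
    # Checking the chart: she is certain she would then remove the correct kidney,
    # and the one-minute delay lowers the survival probability to .999.
    return 1.0 * 0.999

# With the stipulated credence of .99 that the left kidney is diseased:
print(eu_operate_now(0.99))     # 0.99  -- checking first (0.999) maximizes expected utility
print(eu_check_chart_first())   # 0.999

# Conditionalizing on the proposition that the left kidney is diseased
# (treating the putatively known proposition as a reason) sends that credence to 1:
print(eu_operate_now(1.0))      # 1.0   -- now operating immediately looks best
```

Notice how thin the margin is: even after conditionalizing, operating immediately wins by only .001, which is part of why, as I say above, crediting the surgeon with any of the further concerns Dustin stipulates away is enough to reverse the verdict.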
Tuesday, January 14, 2014
Diary of a Narcissist
This is a recent diary entry by Reginald, a confused narcissist.
Dear Diary,
I am perturbed. As you know, I've long thought that, if I'm not perfection itself, I must at least be the next best thing to it. I thank Providence every day for so far elevating me above the common man. It is no exaggeration to say that hitherto, I have counted myself among the very most beautiful and significant people in the world. But today I received a terrible shock. While searching the internet for further discussions of me, I happened across a paper by a philosopher called David Kaplan. What I found there shook my deepest convictions to the core. Kaplan argues that certain words—'demonstratives' or 'indexicals', he calls them—are context sensitive; that is to say, the referent of these terms can vary according to the conversational context in which they're used. My first thought, on reading this, was that it seemed like an interesting and plausible semantic claim. The referent of the word 'that', for example, is simply whatever it is at which my flawless finger happens to be pointing when I speak.
But that isn't all.[*] It's one thing to recognise the general semantic framework—it's quite another to make particular entries in the list of context-dependent terms. Among Kaplan's list of context-dependent terms are the very dearest and most important to me! He includes on his list, for example, such touchstones as 'I' and 'me'! Can you imagine, diary? I—Reginald the all-right—dependent on such contingencies as conversational contexts? Never in my wildest dreams would I have imagined that anyone would so trivialise me. Needless to say, I am deeply shaken. Can I really accept that I am so unimportant? That there is nothing special about me, but rather that I'm just whoever happens to be speaking in a given conversation? The thought terrifies me. Tomorrow I shall read works by Gareth Evans and Christopher Peacocke to see if they might restore me to the glory I thought I deserved.
Fondly,
Reginald