Tuesday, April 29, 2014

More on the well of knowledge norms

Dustin Locke has published a response to my Thought article, "Knowledge Norms and Acting Well". My paper (draft here) argued that many counterexample-based arguments against knowledge norms of practical reasoning take a problematic form: they generate a case where it seems that S knows that p, but also that S is not in a strong enough epistemic position to phi. These verdicts together tell us nothing interesting unless we assume some story about the relationship between p and phi; but defenders of knowledge norms needn't and shouldn't accept many such relationships.

For example, in Jessica Brown's widely-discussed surgeon case, it is thought to be intuitive that, before double-checking the charts, (a) the surgeon knows that the left kidney is the one to remove; but (b) she ought not to operate until she has double-checked. This is only a problem for the idea that one's reasons are all and only what one knows if the proposition that the left kidney is the one to remove would, if held as a reason, be a sufficient reason to justify operating before double-checking the charts. But why should one think that?

Dustin resists my argument at several points. I'm not sure what to say in response to many of them; I think they're helpfully clarifying the sources of disagreement, but they don't make me feel any worse about my point of view. For example, Dustin seems to be happy to rest on certain kinds of very theoretical intuitions, like the intuition that the surgeon isn't justified in using the proposition that the left kidney is the bad one as a reason that counts in favour of removing the left kidney immediately. I don't have this intuition, and I wouldn't want to trust it if I did. I feel pretty good about intuitions about what actions are ok in what circumstances, but deeply theoretical claims like these don't seem to me to be acceptable dialectical starting places.

In what I found to be the most interesting part of his paper, Dustin also constructs a version of Brown's surgeon case in which, if one assumes that (a) a Bayesian picture of practical rationality is correct and (b) talk of practical reasons translates into Bayesian terms by letting one conditionalize on one's reasons, one can derive the intuition mentioned above. I think both of these assumptions are very debatable, but I also think that the case Dustin tries to stipulate is more problematic than he assumes. He offers the following stipulations:
  1. The surgeon cares about, and only about, whether the patient lives.
  2. The surgeon has credence 1 that exactly one of the patient's kidneys is diseased, and a .99 degree of credence that it is the left kidney.
  3. If the surgeon performs the surgery without first checking the chart, she will begin it immediately; if she first checks the patient's chart, she will begin the surgery in one minute.
  4. The surgeon has credence 1 that were she to check the chart, she would then remove the correct kidney.
  5. If the patient has the correct kidney removed during the operation, then there are the following probabilities that he will live, depending on how soon the surgery begins: (5a) If the surgery begins immediately and the correct kidney is removed, there is a probability of 1 that the patient will live; (5b) If the surgery begins in one minute and the correct kidney is removed, there is a probability of .999 that the patient will live.
  6. If the patient has the wrong kidney removed during the operation, then the probability that he will live is 0.
(This list is quoted directly.) I have two worries. First, Dustin also says of the case that "it's quite plausible that the surgeon knows that the left kidney is diseased", and assumes that she does. But this requires a very substantive epistemological and psychological assumption about the relationship between credence and knowledge. It is not at all innocent to assume that knowledge is consistent with non-maximal credence in this way. For lottery-related reasons, Dustin is probably committing himself to the denial of multi-premise closure here. (Indeed, for reasons like the ones Maria Lasonen-Aarnio has emphasized, he may very well be committed to denying single-premise closure as well.) That's not a completely crazy thing to end up being committed to, but I think it substantially mitigates the rhetorical force of an argument against me here. Similarly, there are probably good reasons to deny that the surgeon outright believes that the left kidney is diseased under these circumstances, either for conceptual/metaphysical reasons (see e.g. Brian Weatherson's "Can we do without pragmatic encroachment?" or Roger Clarke's "Belief is credence one (in context)") or for psychological reasons (e.g. Jennifer Nagel's "Epistemic anxiety and adaptive invariantism"). If any of these views is right, then Dustin is committed to knowledge without outright belief.
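
To make the intended calculation explicit: granting the stipulations and the reasons-as-conditionalization picture, the expected-utility comparison presumably runs as in the sketch below. The Python encoding, and the identification of utility with the probability that the patient lives, are my way of illustrating the picture, not anything Dustin writes down.

```python
# Toy reconstruction of the expected-utility comparison in Dustin's case.
# Per stipulation 1, utility is identified with the probability that the patient lives.

CRED_LEFT_DISEASED = 0.99     # stipulation 2
P_LIVE_NOW_CORRECT = 1.0      # stipulation 5a: operate immediately, correct kidney removed
P_LIVE_LATER_CORRECT = 0.999  # stipulation 5b: check first, correct kidney removed
P_LIVE_WRONG = 0.0            # stipulation 6: wrong kidney removed

def eu_operate_now(credence_left_diseased):
    """Expected utility of removing the left kidney immediately."""
    return (credence_left_diseased * P_LIVE_NOW_CORRECT
            + (1 - credence_left_diseased) * P_LIVE_WRONG)

def eu_check_first():
    """Expected utility of checking the chart first; stipulation 4 guarantees the correct kidney."""
    return P_LIVE_LATER_CORRECT

# On her bare credences, checking first comes out ahead:
print(eu_operate_now(CRED_LEFT_DISEASED), eu_check_first())  # 0.99 vs 0.999

# Conditionalizing on "the left kidney is diseased" (as the picture directs,
# if that proposition is among her reasons) flips the verdict:
print(eu_operate_now(1.0), eu_check_first())                 # 1.0 vs 0.999
```

So, on my reading, the derivation goes: it is (supposed to be) intuitive that she ought to check first; but if she were entitled to treat the proposition that the left kidney is diseased as a reason, conditionalizing on it would favour operating immediately; so she is not entitled to treat it as a reason, which is the theoretical intuition discussed above.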

My second worry concerns stipulation number 1: this is a surgeon who cares only about the life of the patient. From a realistic point of view, this is a very strange surgeon. According to Dustin's stipulations, the surgeon cares nothing at all about any of the following: whether she follows hospital procedure; whether she sets a good example for the students observing; whether she acts only on propositions that she knows; whether she is proceeding rationally. These strong assumptions are not idle; if we allow that she cares about any of these things, the utility calculus will not require her to go ahead without checking, even when she conditionalizes on the content of her knowledge that the left kidney is diseased. (Suppose she cares about acting only on what she knows, and that she doesn't know whether she knows; then there is a substantial risk that, by operating immediately, she acts on something she doesn't know.) But these very strange assumptions make our intuitions harder to trust. When we try to imagine ourselves in her position, we naturally assume she cares about the ordinary things people might care about. Stipulating that she cares about only one thing, without even mentioning the many other things we have to remember to disregard, makes it very hard to get into her mindset. So I'm inclined to mistrust intuitions about so heavily-stipulated a case.
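
To illustrate the point in the parenthesis above, here is a toy extension of the same sketch. The disutility of acting on something she doesn't know, and her credence that she does know, are invented numbers; the only point is that once such a term gets any weight at all, checking first can come out ahead even after she conditionalizes on the proposition that the left kidney is diseased.

```python
# Extension of the earlier sketch: the surgeon also cares, a little, about
# acting only on propositions she knows.
P_LIVE_NOW_CORRECT = 1.0
P_LIVE_LATER_CORRECT = 0.999
D_ACT_ON_UNKNOWN = 0.01  # assumed disutility of acting on something she doesn't know
CRED_SHE_KNOWS = 0.8     # assumed credence that she knows the left kidney is diseased

# She has conditionalized on "the left kidney is diseased", so the survival term
# for operating now is 1; but the risk of acting on something she doesn't know remains.
# (If she checks the chart first, that risk goes away.)
eu_operate_now = P_LIVE_NOW_CORRECT - (1 - CRED_SHE_KNOWS) * D_ACT_ON_UNKNOWN  # 0.998
eu_check_first = P_LIVE_LATER_CORRECT                                          # 0.999

print(eu_operate_now < eu_check_first)  # True: checking first wins again
```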