
Wednesday, May 11, 2016

What is proof beyond a reasonable doubt?

As I finish up a few long-standing projects on knowledge (check out my draft monograph!) I'm hoping to start exploring a few new-for-me areas. In recent years my philosophical discussions on this blog have been relatively esoteric, trying out a new objection to so-and-so's third paper on such-and-such; when I was in grad school I used it as an early thinking-out-loud forum about more basic stuff. If I can swallow my pride enough, I may return to that form over the next several months. One area I'm quite ignorant and naive—but also curious—about is law. In particular, I'm curious about how legal concepts like evidence and testimony interact with epistemological concepts like evidence and testimony. I'm a complete novice in this field, so if I'm overlooking things that seem totally obvious to any readers, I'd be grateful to have it pointed out, especially if it comes along with reading advice.

One of my entry-points into the literature here is my UBC colleague John Woods's book Is Legal Reasoning Irrational? An Introduction to the Epistemology of Law. (I'm maybe 40% of the way through a first read of it now.) Here's one thing I've found interesting so far. Woods writes, on p. 103:
Spike cannot be convicted unless a jury unanimously finds that his guilt has been proved beyond a reasonable doubt. Certainty beyond a reasonable doubt is sometimes said to be moral certainty. Under either name, as Gifis writes, it is a conviction based on persuasive reasons and excluding doubts that a contrary conclusion can exist. This is much too strong a formulation. There is plenty of case law that allows for conviction even in the face of a perfectly reasonable case to the contrary, recognized as such by the jury. On an alternative formulation of the standard, a juror is said to be morally certain of a fact when he or she would act in reliance upon its truth in matters of greatest importance to himself or herself. Although better, this appears to omit the essential requirement of criminal proof that this readiness to act, this moral certainty, be grounded in the juror's belief that the burden of criminal proof has been met. In other words, it seems not to be sufficient for criminal conviction that a juror be in a state of moral certainty of the accused's guilt. It also matters how he came to be in that state. 
So we have these notions:

  • Proof beyond a reasonable doubt
  • Certainty beyond a reasonable doubt
  • A juror's being in a state of moral certainty
Woods slides between the first two without comment. Maybe the thought is that there is this tight relationship: proofs establish certainty. (In logic or mathematics, this seems plausible—if I have a proof of something, then it is certain for me.) But as Woods points out, it's another matter whether my psychology reflects this certainty. I might have a proof but doubt anyway; or I might be irrationally certain, even absent a proof.

I wish Woods had explained further what he had in mind when he discusses appropriate conviction in the face of 'a perfectly reasonable case to the contrary'. (I am not sure who Gifis is—I don't see a citation.) Can there be 'perfectly reasonable' cases that have been proven not to be the case? If 'provability beyond a reasonable doubt' is closed under deduction, then it seems like there must be. (If it's been proven beyond a reasonable doubt that Jones committed the crime, then it's provable beyond a reasonable doubt that Jones wasn't framed.)

The alternative formulation mentioned targets the psychological state of being morally certain. This seems at least analogous (perhaps more than analogous) to the epistemological notion of belief or commitment. Some epistemologists, like Jennifer Nagel and Brian Weatherson, have emphasized that whether one is in this state depends (causally or constitutively, respectively) on the practical situation: when the stakes are high, one is less likely to believe. Maybe moral certainty is something like this: being such that one would count as believing even in very high-stakes cases. And although this state isn't, as Woods points out, sufficient for appropriate conviction, a normative connection seems plausible: one should be in this state only if one has a proof of corresponding strength. Don't be sure unless it's proven.

It's hard for me not to think in terms of knowledge. One contextualist way of mapping these thoughts would have it that, given a high standard for knowledge, all and only that which is proven beyond a reasonable doubt is 'known', and jurors should vote to convict if and only if they know the defendant to be guilty. I think this could make sense of these issues, although it'll probably be too simple in other respects, such as the restriction of admissible evidence. (Some things jurors know, they're not supposed to consider. Or maybe a weird standard could count them as unknown? Something to think about.)

Anyway, I know I'm being really naive here, but I'd be interested in digging in a little bit, if anyone has ideas or reading suggestions.

Tuesday, April 29, 2014

More on the well of knowledge norms

Dustin Locke has published a response to my Thought article, "Knowledge Norms and Acting Well". My paper (draft here) argued that lots of counterexample-based arguments against knowledge norms of practical reasoning take a problematic form: generating a case where it seems like S knows that p, but where it seems like S is not in a strong enough epistemic position to phi. These verdicts together tell us nothing interesting unless we assume some story about the relationship between p and phi; but defenders of knowledge norms needn't and shouldn't accept many such relationships.

For example, in Jessica Brown's widely-discussed surgeon case, it is thought to be intuitive that before double-checking the charts, (a) the surgeon knows that the left kidney is the one to remove; but (b) the surgeon ought not to operate before double-checking the charts. This is only a problem for the idea that one's reasons are all and only what one knows if the proposition the left kidney is the one to remove would, if held as a reason, be a sufficient reason to justify operating before double-checking the charts. But why should one think that?

Dustin resists my argument at several points. I'm not sure what to say in response to many of them; I think they're helpfully clarifying the sources of disagreement, but they don't make me feel any worse about my point of view. For example, Dustin seems to be happy to rest on certain kinds of very theoretical intuitions, like the intuition that the surgeon isn't justified in using the proposition that the left kidney is the bad one as a reason that counts in favour of removing the left kidney immediately. I don't have this intuition, and I wouldn't want to trust it if I did. I feel pretty good about intuitions about what actions are ok in what circumstances, but deeply theoretical claims like these don't seem to me to be acceptable dialectical starting places.

In what I found to be the most interesting part of his paper, Dustin also constructs a version of Brown's surgeon case where, if one assumes that (a) a Bayesian picture of practical rationality is correct and (b) practical reasons talk translates into the Bayesian talk by letting one conditionalize on one's reasons, we can derive the intuition mentioned above. I think that both of these assumptions are very debatable, but I also think that the case Dustin tries to stipulate is more problematic than he assumes. He offers the following stipulations:
  1. The surgeon cares about, and only about, whether the patient lives.
  2. The surgeon has credence 1 that exactly one of the patient's kidneys is diseased, and a .99 degree of credence that it is the left kidney.
  3. If the surgeon performs the surgery without first checking the chart, she will begin it immediately; if she first checks the patient's chart, she will begin the surgery in one minute.
  4. The surgeon has credence 1 that were she to check the chart, she would then remove the correct kidney.
  5. If the patient has the correct kidney removed during the operation, then there are the following probabilities that he will live, depending on how soon the surgery begins: (5a) If the surgery begins immediately and the correct kidney is removed, there is a probability of 1 that the patient will live; (5b) If the surgery begins in one minute and the correct kidney is removed, there is a probability of .999 that the patient will live.
  6. If the patient has the wrong kidney removed during the operation, then the probability that he will live is 0.
(This list is quoted directly.) I have two worries. First, Dustin also says of the case that "it's quite plausible that the surgeon knows that the left kidney is diseased", and assumes that she does. But this requires a very substantive epistemological and psychological assumption about the relationship between credence and knowledge. It is not at all innocent to assume that knowledge is consistent with non-maximal credence like this. For lottery-related reasons, Dustin is probably committing himself to the denial of multi-premise closure here. (Indeed, for reasons like the ones Maria Lasonen-Aarnio has emphasized, he may very well commit himself to denying single-premise closure.) That's not a completely crazy thing to end up being committed to, but I think it substantially mitigates the rhetorical force of an argument against me here. Similarly, there are probably good reasons to deny that the surgeon outright believes that the left kidney is diseased under these circumstances, either for conceptual/metaphysical reasons (see e.g. Brian Weatherson's "Can we do without pragmatic encroachment?" or Roger Clarke's "Belief is credence one (in context)") or for psychological reasons (e.g. Jennifer Nagel's "Epistemic anxiety and adaptive invariantism"). If any of these views is right, then Dustin is committed to knowledge without outright belief.

My second worry concerns stipulation number 1: this is a surgeon who cares only about the life of the patient. From a realistic point of view, this is a very strange surgeon. According to Dustin's stipulations, the surgeon cares nothing at all about any of the following: whether she follows hospital procedure; whether she sets a good example for the students observing; whether she acts only on propositions that she knows; whether she is proceeding rationally. These strong assumptions are not idle; if we allow that she cares about any of these things, the utility calculus will not require her to go without checking, even when she conditionalizes on the content of her knowledge that the left kidney is diseased. (Suppose she cares about whether she acts only on that which she knows, and that she doesn't know whether she knows; then there is a substantial risk of the negative outcome of acting on something she doesn't know.) But these very strange assumptions will make our intuitions harder to trust. When we try to imagine ourselves in her position, we naturally assume she cares about the ordinary things people might care about. Stipulating that she only cares about one thing—not even mentioning the many other things we have to remember to disregard—makes it very hard to get into her mindset. So I'm inclined to mistrust intuitions about so heavily-stipulated a case.
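To make the arithmetic behind the case explicit: if I've read the stipulations correctly, straightforward expected-utility reasoning over the surgeon's credences favours checking the chart, and the verdict flips only once she conditionalizes on the proposition she putatively knows. Here is a minimal sketch, with the numbers taken directly from the stipulations and "utility" identified with the probability that the patient lives (per stipulation 1):

```python
# Numbers from Dustin Locke's stipulations (2-6); utility = P(patient lives).
cred_left_diseased = 0.99   # stipulation 2: credence the left kidney is the diseased one
p_live_immediate = 1.0      # 5a: correct kidney removed, surgery begins immediately
p_live_after_check = 0.999  # 5b: correct kidney removed, surgery begins in one minute
p_live_wrong_kidney = 0.0   # stipulation 6: wrong kidney removed

# Expected utility of operating on the left kidney immediately,
# computed from her actual credences:
eu_immediate = (cred_left_diseased * p_live_immediate
                + (1 - cred_left_diseased) * p_live_wrong_kidney)  # 0.99

# Expected utility of checking the chart first (stipulation 4: she is
# certain that after checking she removes the correct kidney):
eu_check = 1.0 * p_live_after_check  # 0.999 -> checking wins

# Now conditionalize on the proposition she putatively knows, that the
# left kidney is diseased (her credence in it goes to 1):
eu_immediate_given_knowledge = 1.0 * p_live_immediate  # 1.0 -> operating now wins
```

So if treating p as a reason amounts to conditionalizing on p, the knowledge-as-reasons view appears to license operating immediately, which is exactly the clash the case is built to generate; my worries above are about the stipulations that make this derivation run.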

Tuesday, January 08, 2013

'Knows' contextualism derivative from 'believes' contextualism?

Here's Brian Weatherson:
[B]elief ascriptions and knowledge ascriptions raise at least some similar issues. Consider a kind of contextualism about belief ascriptions, which holds that (L) can be truly uttered in some contexts, but not in others, depending on just what aspects of Lois Lane’s psychology are relevant in the conversation:
(L) Lois Lane believes that Clark Kent is vulnerable to kryptonite.
We could imagine a theorist who says that whether (L) can be uttered truly depends on whether it matters to the conversation that Lois Lane might not recognise Clark Kent when he’s wearing his Superman uniform. And, this theorist might continue, this isn’t because ‘Clark Kent’ is a context-sensitive expression; it is rather because ‘believes’ is context-sensitive. Such a theorist will also, presumably, say that whether (K) can be uttered truly is context-sensitive.
(K) Lois Lane knows that Clark Kent is vulnerable to kryptonite.
And so, our theorist is a kind of contextualist about knowledge ascriptions. But they might agree with approximately none of the motivations for contextualism about knowledge ascriptions put forward by Cohen (1988), DeRose (1995) or Lewis (1996). Rather, they are a contextualist about knowledge ascriptions solely because they are contextualist about belief ascriptions like (L). Call the position I’ve just described doxastic contextualism about knowledge ascriptions. It’s a kind of contextualism all right; it says that (K) is context sensitive, and not merely because of the context-sensitivity of any term in the ‘that’-clause. But it explains the contextualism solely in terms of the contextualism of belief ascriptions.
I think that the kind of view Brian describes is very interesting and worthy of more attention than it's gotten. But I'm puzzled by his characterization of it. How is it that the knows-contextualism is explained by the belief-contextualism here? Belief-contextualism is a view about the word 'believes', which does not typically occur in knowledge attributions, the subject of knows-contextualism. So how could belief-contextualism explain knows-contextualism? In what sense is the theorist in question "a contextualist about knowledge ascriptions solely because they are contextualist about belief ascriptions"?

One might motivate knows-contextualism in a way reliant on belief-contextualism. It might look something like this: belief-contextualism is true. Therefore, for some situation, there are two contexts C1 and C2 such that in C1, 'S believes p' expresses a truth for that situation, and in C2, 'S believes p' expresses a falsehood for that situation. In some such C1, 'S knows p' also expresses a truth for that situation. In all contexts, 'S knows p' expresses a truth only if 'S believes p' expresses a truth. Therefore, in C2, 'S knows p' does not express a truth. Therefore knows-contextualism is true.

That argument gets the job done, but it doesn't seem at all like it must be motivating the combination of views Brian discusses. Why should one run through a metalinguistic principle about the relation between 'knows' and 'believes'? It seems to me to be at least as plausible that the corresponding views about 'believes' and 'knows' are motivated in parallel by considerations about propositional attitude ascriptions in general. I agree with Brian that someone who thinks (L)'s content depends on context for these kinds of reasons is also likely to accept the same for (K). But I don't think there's any reason to think the resulting knows-contextualism is in any sense derivative from the belief-contextualism. It's just that the same kinds of data motivate both views.

Wednesday, October 24, 2012

Sider on epistemic value and nature's joints

Ted Sider thinks that it's epistemically preferable to think in joint-carving terms; this is a way of better matching one's beliefs to the world. While something about that sounds right, I think that some of the things he says must go too far. He writes, for example, that
[j]oint-carving thought does not have merely instrumental value. It is rather a constitutive aim of the practice of forming beliefs, as constitutive as the more commonly recognized aim of truth. (WtBotW p. 61)
I don't think this can be right. The idea of somebody forming beliefs without any kind of sensitivity or regard for whether they are true is incoherent; this is not so for someone who doesn't care whether her beliefs carve nature at the joints. Suppose one is charged with failing to carve at the joints with her beliefs, and replies flippantly --- so what? --- maintaining her previous beliefs. She might be criticizable on epistemic grounds, but her attitude is comprehensible, even if we do not approve of it. Compare the person who is charged with having false beliefs, and replies in the same way --- indifferently accepting the charge, and continuing to believe as before. This isn't just epistemically vicious; it runs counter to what it is to be a belief. In other words, a truth aim has a better claim to a constitutive connection to belief than a joint-carving aim does.

Here is another difference that should not be overlooked: some instances of non-joint-carving beliefs are absolutely correct to hold. Maybe they're not as good as their joint-carving cousins, but one needn't choose between them. You can believe that the emerald is green and that it is grue. In fact, that's exactly what you should do. And you shouldn't feel at all epistemically deficient for having the latter belief. Compare this to false beliefs: every false belief you have prevents you from having a true one.

Moore-paradoxes show a deep connection between belief and truth; there is a deep incoherence in the idea of accepting: "I believe that p, even though not-p." But there is no corresponding incoherence in "I believe that p, even though the terms in p do not carve at nature's joints."

Whatever epistemic value attaches to joint-carving, it is less central to belief than truth is.

Wednesday, September 29, 2010

Belief and Desire as Commitment

According to a common view, beliefs suffer a coherence constraint that desires do not. If I believe that p, then I'm very unlikely, at the very same time, to believe that not-p -- and if I do, that's a clear rational failing. But desiring various contradictory things is commonplace.

I don't want to dispute that the English statement of the common view just given can often express a truth. But I think it's a mistake to infer from this that there's something interestingly different between the natures of belief and desire. The words 'belief' and 'desire' are used loosely to refer to a few different kinds of things. And the ones that are of most interest in a lot of philosophy, I think, are structurally much more similar than the common view would lead one to think.

Tuesday, September 28, 2010

Particular Beliefs and Belief-Desire Psychology

Let us suppose that Dmitri knows how to sing the "Il balen" cadenza from Verdi's Il Trovatore.

There's a debate about whether Dmitri's knowing how to sing the cadenza amounts to knowing some proposition. According to 'intellectualists', knowing how to X just is (to an approximation) knowing, for some w, that w is a way to X. I don't mean to weigh in on that debate just now, but one of the moves in it is relevant for an issue concerning imagination, which is my current research topic du jour. The move is, on its face, an argument against the thesis that knowing how is knowing that -- it argues that in some cases in which know-how is clearly present, the relevant know-that is not present, because the requisite beliefs are absent or even disbelieved. For example, Dmitri may, consistent with his knowing how to sing the cadenza, have entirely erroneous beliefs about how he does it. Maybe he thinks that he clenches his abdominal muscles, when what he really does is expand his ribcage. He thinks that he opens his mouth widely, but what he actually does is lift his soft palate. The way he sings the cadenza is by expanding his ribcage and lifting his soft palate, but he doesn't know that, because he doesn't believe it. Charles Wallis offers a version of this argument. (This was brought to my attention in a forthcoming paper by my new colleague Ephraim Glick.)

Sunday, September 26, 2010

Inference in Imagination, Belief, and Desire

Shaun Nichols writes:
In addition to a pretense box, Stich and I propose a mechanism that supplies the pretense box with representations that initiate or embellish an episode of pretense, the “Script Elaborator”. This is required to explain the bizarre and creative elements that are evident in much pretend play. However, there are also much more staid and predictable elaborations in pretend play. This too is well illustrated by Leslie’s experiment. Virtually all of the children in his experiment responded the same way when asked to point to the “empty cup”. How are these orderly patterns to be explained? In everyday life when we acquire new beliefs, we routinely draw inferences and update our beliefs. No one knows how this process works, but no one disputes that it does work. There must be some set of mechanisms subserving inference and updating, and we can simply use another functional grouping to collect these mechanisms under the heading “Inference Mechanisms”. Now, to explain the orderly responses of the children in Leslie’s experiment, we propose that the representations in the pretense box are processed by the same inference mechanisms that operate over real beliefs. Of course, to draw these inferences the child must be able to use real world knowledge about the effects of gravity and so forth, and so Stich and I also suppose that the inferences the child makes during pretense can somehow draw on the child’s beliefs.

This is, I think, a fairly typical statement of one important respect in which belief is often said to be similar to imagination: each is subject to the same inference mechanisms. Nichols includes this chart:

[Figure: Nichols's boxology diagram, showing the belief, desire, and pretense boxes feeding into shared inference mechanisms]

Notice the 'inference mechanisms' that act on beliefs and imaginings alike.

Now I can see well enough that pretense and belief inferences tend to go in the same way. If I know full well that p only if q, and believe p, I'll often come to infer to a belief that q, just as, if I imagine p, I'll often come to infer to imagine q. (Modulo various familiar complications: sometimes I give up the previous belief, etc.) But doesn't just the same thing happen with desire? If I desire that p, and know full well that p only if q, I'll very often, through a very ordinary sort of means-end reasoning, come to desire that q, modulo various familiar complications like the possibility that I'll stop desiring p.

Take a background situation where I know that nothing funny is going on with the cups; gravity is normal, the water is liquid, etc.

Suppose I believe the cup had water in it and has been turned over. Then I'll believe that the cup is now empty.

Suppose I imagine or pretend that the cup had water in it and has been turned over. Then I'll imagine or pretend that the cup is now empty.

Suppose I desire that the cup had water in it and has now been turned over. Then I'll desire that the cup is now empty.

This suggests to me that the similarities between imagination and belief, in contrast with desire, are exaggerated by, e.g., the diagram above. Those inference mechanisms apply to desires just as well as to beliefs and pretenses. Are there similarities in inference mechanisms that distinguish beliefs and pretenses/imaginings from propositional attitudes more generally?
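The parallel in the cup examples can be put as a toy model. This is a deliberately crude sketch of my own, not anything Nichols and Stich themselves give: a single modus-ponens-style inference mechanism, drawing on background conditionals held in the belief box, elaborates whichever attitude box it is run on — belief, pretense, or desire alike.

```python
# Toy boxology: one shared inference mechanism operating over three
# attitude "boxes". The background conditional ("if the cup had water
# and was turned over, then the cup is now empty") lives among the
# agent's beliefs; the mechanism uses it to elaborate any box.

background = {("cup had water and was turned over", "cup is now empty")}

def infer(box, conditionals=background):
    """Close a box's contents under modus ponens on the background
    conditionals (one pass suffices for this toy example)."""
    closed = set(box)
    for antecedent, consequent in conditionals:
        if antecedent in closed:
            closed.add(consequent)
    return closed

belief_box = {"cup had water and was turned over"}
pretense_box = {"cup had water and was turned over"}
desire_box = {"cup had water and was turned over"}

# The same mechanism yields the same elaboration in every box:
for box in (belief_box, pretense_box, desire_box):
    assert "cup is now empty" in infer(box)
```

If this much is right, a shared inference mechanism does nothing to pair imagination with belief in particular; the interesting question is whether there is some inferential behaviour that the mechanism would exhibit on the belief and pretense boxes but not on the desire box.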