Showing posts with label evidence. Show all posts

Friday, September 04, 2015

Evidence for presuppositions

One of the things evidential challenges do is to question what grounds we have for our antecedent beliefs. Indeed, that's one of the main things they do. Maybe it's common knowledge that we all believe that p, but we haven't thought much about why we all believe that p. This is just the sort of case where it might make sense to stop and wonder whether p—which is mutually recognized to be something we believe—is actually supported by our evidence.

I think that this observation undermines a claim Michael Blome-Tillmann makes in his book Knowledge & Presuppositions. The key idea there, as in his earlier paper of the same title, is to embrace a Lewisian form of contextualism about 'knows', but where Lewis's 'Rule of Actuality' is replaced by a 'Rule of Presupposition'—that which is consistent with the presuppositions in a conversation is not properly ignored. And Michael characterizes conversational presuppositions in terms of dispositions reflective of mutual belief.

Michael considers (p. 99) this dialogue:

A: I know that animal is a zebra.
B: How do you know that it isn't a mule cleverly painted to look like a zebra?
A: Hmm, for all I know it is a painted mule. So I was wrong. I didn't know that it is a zebra after all.

Unlike the canonical strategy (on which A's final self-attribution of error is mistaken), Michael thinks that A's final response is wholly correct, and that the initial utterance is false. He thinks that the painted mule hypothesis is relevant, even at the start of the conversation. Here's why:
[B] does not pragmatically presuppose that the animals are not cleverly painted mules—neither before nor after asking her first question. Since B asks whether A can rule out that the animal is a cleverly painted mule, B is clearly not disposed to behave, in her use of language, as if she believed it to be common ground that the animal is not a cleverly painted mule. For if she were so disposed, she would certainly not have asked that very question. (100)
This seems to me simply to be wrong, for the reason laid out in my opening paragraph. Asking whether p is ruled out by evidence seems to me wholly consistent with thinking it to be common ground that p.

(Although I think it's independent, this point leads to the same conclusion as, and bears an argumentative similarity to, the one I press against Michael's view in this paper.)

Sunday, July 19, 2015

External Factors and Evidential Symmetry

I'm thinking about the relationship between factive reasons and internalism.

A certain gumball machine has two possible modes. In mode A, it delivers blue gumballs with 90% probability, and red gumballs with 10% probability; in mode B, those proportions are reversed. (The probability for each gumball is independent.) Every morning, a fair coin is flipped to determine in which mode it will remain for the duration of the day. Vibhuti knows all of this. She begins our story with an epistemic probability of .5 for proposition h.
h: the machine is in mode A.
Now two of Vibhuti's friends who have been to the gumball machine today come along. Tunc tells her that he bought a gumball, and it was blue. (This is evidence in favour of h.) Eric tells her that he bought a gumball, and it was red. (This is evidence against h.) Tunc and Eric are equally (and highly) honest and reliable (and Vibhuti knows this). The evidential situation looks entirely symmetric, so Vibhuti's evidential probability for h looks still to be .5.
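The claimed symmetry can be checked with a quick Bayesian calculation. This is just a sketch, with Python used as a calculator; I idealize by treating each friend's report, taken at face value, as a report of the colour actually drawn:

```python
# Sketch: update P(h) on the two face-value colour reports.
# The 90/10 mechanism is as stipulated in the case.
P_BLUE_GIVEN_A = 0.9  # mode A: blue with probability 0.9
P_BLUE_GIVEN_B = 0.1  # mode B: the proportions are reversed

def posterior_h(prior, reports):
    """Posterior for h after conditioning on a sequence of colour reports."""
    odds = prior / (1 - prior)
    for colour in reports:
        if colour == "blue":
            odds *= P_BLUE_GIVEN_A / P_BLUE_GIVEN_B  # blue favours mode A
        else:
            odds *= (1 - P_BLUE_GIVEN_A) / (1 - P_BLUE_GIVEN_B)  # red favours mode B
    return odds / (1 + odds)

# One blue report (Tunc) and one red report (Eric): the two likelihood
# ratios cancel, and the posterior lands back at 0.5.
print(round(posterior_h(0.5, ["blue", "red"]), 10))  # 0.5
```

The likelihood ratio contributed by a blue report (9:1 for h) is exactly the inverse of the one contributed by a red report, which is what makes the face-value situation symmetric.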

But certain approaches to evidence might disrupt this apparent symmetry. Suppose it turns out that Eric is lying, but Tunc is telling the truth, and indeed, reporting something he knows. (We've stipulated that this is unlikely, but not that it's impossible.) The lie is skillful, and Vibhuti isn't suspicious; she very reasonably takes both of their assertions at face value. Let's also take on board the following epistemic assumptions (if only to see where they lead):

  1. Testimony almost always puts one in possession of knowledge of the fact that the testimony occurred.
  2. Testimony at least sometimes puts one in possession of knowledge of the fact testified.
  3. E=K.
(Note that I am not assuming a reductivist approach to testimony; there's no claim that the knowledge from 2 typically or ever is based on the knowledge from 1.)

Given these assumptions, it looks like we may not get Vibhuti's case as symmetrical after all. Although she has some evidence in favour of h and some against it, it isn't all symmetrical. For it looks like her relevant evidence is the following:
  • Tunc says he got a blue one.
  • Tunc got a blue one.
  • Eric says he got a red one.
The first and third on this list look to be symmetrical for and against h. But the strongest item here counts unambiguously in favour of it. You might think that the second swamps the first in evidential relevance (that seems roughly right); if so, then we could just look at this list:

  • Tunc got a blue one.
  • Eric says he got a red one.
Here we have one piece of evidence in each direction, but the first item, which counts in favour of h, looks stronger than the second. So it looks like there's going to be some pressure against the idea that Vibhuti's evidential probability in h is .5; it seems like it should be higher than .5.
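One way to make that pressure vivid is with a toy model. Everything here beyond the 90/10 mechanism is my own illustrative modelling: the honesty parameter r is made up, and I follow the setup of the case in treating Tunc's blue report as true and Eric's red report as a lie, so that the E=K evidence set is the fact that Tunc got a blue one plus the fact that Eric *said* he got a red one:

```python
# Toy model of the asymmetric evidence set (my own idealization; the
# honesty parameter r is not stipulated anywhere in the case).
P_BLUE_A, P_BLUE_B = 0.9, 0.1
r = 0.9  # assumed probability that a friend reports his colour truthfully

def p_says_red(p_blue):
    # A friend says "red" if he is truthful about a red ball,
    # or lying about a blue one.
    return r * (1 - p_blue) + (1 - r) * p_blue

# Likelihoods of the total evidence under h (mode A) and not-h (mode B):
like_h    = P_BLUE_A * p_says_red(P_BLUE_A)   # Tunc got blue; Eric says red
like_noth = P_BLUE_B * p_says_red(P_BLUE_B)

posterior = like_h / (like_h + like_noth)  # prior is 0.5, so it cancels
print(round(posterior, 3))  # 0.664: the evidence now favours h
```

Conditioning on the fact itself (Tunc got a blue one) contributes the full 9:1 ratio for h, while Eric's mere saying contributes a much weaker ratio against it, so the posterior comes out above 0.5 on these made-up numbers.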

So how, if at all, could E=K (and really, the challenge applies to a broader range of views: anyone strict enough to demand true evidence, but lax enough to allow testimonial contents sometimes to be evidence) accommodate the apparent evidential symmetry in cases like this? I see four options.
  1. Deny that one can ever get the contents of testimony as evidence, because we don't really know the things we're told, even when we're told by people who know. (Skepticism about testimony.) This might be more palatable than it seems if accompanied with some kind of contextualism about both 'knows' and 'evidence'.
  2. Deny that one can ever get the contents of testimony as evidence, because not all knowledge is evidence—maybe only direct or basic knowledge counts as evidence. (E=BK)
  3. Deny that in particular cases like this one can get knowledge via testimony. If one friend is lying to you, then you're in a skeptical situation where testimony is unreliable. (But will this solution be general enough?)
  4. Admit everything I've said about what evidence Vibhuti has, but argue that, for purposes of evidential probability, the situation is symmetrical after all. (The relationship between evidence and evidential probability is complex; I'm really working with something of a 'black box' for the latter—must we suppose that the black box delivers the asymmetrical verdict in a case like this?)
Maybe there are more, I'm not sure.

Monday, June 01, 2015

Factoring Views about Having Reasons

I have been thinking about Mark Schroeder’s very interesting paper, “Having Reasons”. He argues against a ‘factoring account’ of having a reason for action, and he also argues that epistemologists have been misled by assuming a parallel factoring account of evidence.

I have three reactions.

  1. Schroeder is unclear about what exactly the commitments of the factoring account are; I think he may slide between a stronger and a weaker reading of it. This isn’t disastrous for his own project, because he wants to reject both readings, but I think it’s important to keep them separate (in part because of (2) below).
  2. The stronger reading is pretty plausibly false (though maybe not just for the reasons Schroeder says) but the weaker reading is pretty plausibly true (despite his arguments).
  3. Epistemologists have not been misled by assuming (a strong form of) the factoring account.
I’ll try to defend (1) in this post.

What is the factoring account? Schroeder first introduces it via an analogy:
When someone has a ticket to the opera, that is because there is a ticket to the opera, and it is in her possession—she has it. Similarly, if one has a golf partner, this can only be because there is someone who is a golf partner, and one has him. But here, it is not like there are people out there who have the property of being golf partners, and one is in your possession. Rather, being a golf partner is simply a relational property, and the golf partner you have—your golf partner—is simply the one who stands in the golf partner of relation to you. 
A factoring account of having opera tickets is true. There is an opera ticket, and moreover, one has it. A factoring account of having golf partners, however, is to be rejected. What exactly is wrong with this view? Schroeder says it’s a commitment to the implausible claim that “there are people out there who have the property of being golf partners, and one is in your possession.” But of course, strictly speaking, there are people out there who are golf partners, and one of them is mine. I agree with Schroeder that there’s an important contrast between these cases, but I don’t think he’s quite articulated what it is. I think it has to do with grounding. What makes it the case that I have an opera ticket is the existence of this thing the opera ticket, combined with me standing in a suitable relationship to it. But the existence of the golf partner, combined with my relationship to her, doesn’t make it the case that I have a golf partner. On the contrary, it is my having her as a golf partner that makes it the case that she is a golf partner. The relationship, not the object, is relatively fundamental here; the existence of the golf partner—though genuine—is derivative.

So distinguish these claims:

  • Weak Factoring: Any time S has R as a reason, there exists a reason R, and S stands in a suitable having relation to R.
  • Strong Factoring: What it is for S to have R as a reason is for there to exist a reason R, and for S to stand in a suitable having relation to R.
As the names imply, Strong Factoring implies Weak Factoring, but not vice versa. If what I said about golf partners is correct, Weak Factoring does not get at the intuitive contrast between opera tickets and golf partners. The analogue of Weak Factoring is true of golf partners. (Contra the letter of Schroeder's text, any time one has a golf partner, there really is someone who is a golf partner that one has.) I don’t think Schroeder is at all clear about this; he writes at times as if ‘the Factoring Account’ is just Weak Factoring. (e.g., “[T]he Factoring Account has two major commitments. In any case in which it seems that there is a reason someone has to do something, whatever is the reason that she has must be just that: (1) a reason for her to do it, and (2) one that she has.” p. 58)

The distinction makes an important difference when it comes to thinking about the views one might have about reasons. For example, here is a possible view one might have about reasons: R=K. (A proposition is among a subject’s reasons if and only if the subject knows that proposition.) This view counts as a Weak Factoring view—any time you have knowledge, there is some knowledge, and moreover, you have it. But it is not a Strong Factoring view; the existence of the knowledge ontologically depends on your having the knowledge. It is more like golf partners than opera tickets.

“Weak Factoring” is probably a misnomer, really—the view in question isn’t a kind of factoring at all. It’s a mere entailment claim. So when Schroeder’s argument against what he calls ‘The Factoring View’ takes the form of counterexamples to Weak Factoring, he’s really making a much more radical claim than anything we should call the rejection of a factoring treatment of having reasons. He's rejecting the mere entailment from having a reason to there being a reason.

(His counterexamples are cases where a subject acts on a reasonable but mistaken belief—like Bernard Williams’s subject who takes a sip of the liquid in his glass because he falsely believes it’s a martini. I don’t think these are counterexamples, for reasons I won’t go into right now.)

Saturday, October 05, 2013

Jessica Brown on evidence and luminosity

In "Thought Experiments, Intuitions, and Philosophical Evidence," Jessica Brown introduces a problem for "evidence neutrality" deriving from Williamson's anti-luminosity arguments: evidence neutrality implies that if S has E as evidence, it is always possible for S's community to know that E is evidence, which entails the false claim that evidence is luminous. Sounds ok. Then she writes this puzzling passage:
We might wonder whether we could overcome this first problem by weakening the content element of evidence neutrality. Instead of claiming that if p is part of a subject’s evidence, then her community can agree that p is evidence, the relevant condition could be weakened to the claim that her community can agree that p is true. Although this revised version of the evidence-neutrality principle avoids Williamson’s objection that one is not always in a position to know what one’s evidence is, it faces an objection from Williamson’s anti-luminosity argument. Williamson claims to have established that no nontrivial condition is luminous, where a condition is luminous if and only if for every case a, if in a C obtains, then in a one is in a position to know that C obtains (2000, 95). There is not space here to assess the success of Williamson’s anti-luminosity argument. However, assuming that it is successful, it seems that no mere tinkering with the content element of evidence neutrality will suffice to defend it.
I'm just not seeing the problem here. The proposal we're considering is this: any time S has E as evidence, S (and/or S's community) is in a position to know that E is true. But this does not imply that any non-trivial condition is luminous. The claim that evidence is luminous would need knowledge that E is evidence on the right-hand side; the claim that truth is luminous would need no restriction to evidence on the left-hand side. Saying that evidence requires being in a position to know truth looks wholly consistent with Williamson's anti-luminosity argument. Indeed, setting aside the role of the community -- which as far as I can tell is idle in the argument Brown is considering -- it follows trivially from Williamson's own view, E=K. Notice that S's knowing that p entails that S is in a position to know that p is true; this is no violation of anti-luminosity.
Anybody see what I'm missing?

Monday, October 15, 2012

Acting on knowledge under uncertainty

Susan, who is entertaining later this evening, is about to walk to the local market to buy some olives for cocktails. She now faces that famous perennial question whether to take her umbrella. We stipulate:

  • Susan does not know whether it will rain during her walk.
  • Susan's rational credence that it will rain during her walk is 0.4.
  • The nuisance of carrying the umbrella on her walk will cost Susan 10 utils.
  • The nuisance of being rained upon without an umbrella will cost her 30 utils. (It is no nuisance at all to be rained on if she has her umbrella.)
It's pretty reasonable to suppose that in this case, Susan ought to take the umbrella; we calculate the expected value pretty straightforwardly. The umbrella costs 10, and gives her a 0.4 chance of saving 30. (30 * 0.4) - 10 = +2. If Susan takes the umbrella in a way sensitive to the positive expected value of doing so, there's a pretty strong intuition to the effect that she's done everything right.
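The expected-utility arithmetic can be laid out in a couple of lines. This is just the stipulated numbers run through the standard formula, with Python as a calculator:

```python
# Expected-utility check, using the utils stipulated in the case.
COST_UMBRELLA = 10  # nuisance of carrying the umbrella
COST_SOAKED   = 30  # nuisance of being rained on without it

def gain_from_taking(p_rain):
    """Expected utility of taking the umbrella minus that of leaving it home."""
    return p_rain * COST_SOAKED - COST_UMBRELLA

print(gain_from_taking(0.4))  # 2.0: at credence 0.4, taking it wins
print(gain_from_taking(0.3))  # -1.0: at 0.3 the verdict flips
```

The break-even point is at a rain probability of 10/30, i.e. one third; which side of that line Susan's evidential probability falls on is exactly what will matter below.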

As you know, cases like this one are sometimes thought to be problematic for the knowledge norm of practical reasoning. Alex Jackson's careful paper does a nice job separating different knowledge-norm-like commitments, but he identifies this kind of case as at least a prima facie challenge to the following determination thesis:
What one knows determines what it is rational for one to do (possibly in concert with one’s desires).
I like this determination thesis. (In fact, I like something stronger -- I think we should be after something more like metaphysical grounding, not just determination.) So I need a story about the case of Susan. The Hawthorne & Stanley story is that Susan, if she is acting appropriately, is acting on knowledge about epistemic probabilities. She says to herself that there is a 0.4 chance that it will rain; this is what she treats as a reason, and it is something she knows.

I'm pretty uneasy about this line, although I've never been able to put my finger on exactly what I don't like about it. There is something odd, it seems to me, about probabilistic contents playing these kinds of roles. I know that's not an objection; I'm just recording my uneasiness. Here, anyway, is an objection: suppose she doesn't know the relevant probabilistic claim. Suppose that, for all she knows, the chance that it will rain is 0.3.

Remember, this is evidential probability we're talking about; the difference between the chance's being 0.3 and its being 0.4 can't be made by meteorological facts wholly outside Susan's ken. Still, it's not at all implausible that Susan might not know, with that level of precision, whether her evidence probabilifies rain to degree 0.3 or 0.4. Indeed, my own evidence right now, it seems to me, puts the chances of its raining on me as I walk to work tomorrow right in that ballpark; but I have no idea whether it is closer to 0.3 or to 0.4. I hope you agree this is not a very implausible situation.

Notice also that if the probability really is only 0.3, then, given the stipulations above, Susan's expected value for taking the umbrella is negative. ((30 * 0.3) - 10 = -1) So under the current stipulations, for all Susan knows, taking the umbrella might have negative expected value. She doesn't know that she should take it. She does, we may allow, know that there is some chance of rain, but this doesn't look like a good enough reason to perform this action.

You might think about trying to gild the bitter pill at this stage, suggesting that if she doesn't know whether it's better to take it, then she really does violate the action norm in taking it, although she does so in an excusable way. This seems to be Hawthorne & Stanley's line. But I don't think we should take it. For it's consistent with our case here that Susan is exceptionally well-attuned to the evidence. That is, if the chance really were only 0.3, then she wouldn't take the umbrella. This is of course totally consistent with her failure of introspective discrimination.

I think Hawthorne & Stanley were right to look to knowledge with contents other than that it will rain, but wrong to focus in on probabilistic ones, in part for the reason just offered: there's no particular reason to expect them to be known. (And also in part because of that feeling I haven't managed yet to articulate, that these things aren't the right kinds of things to be invoking in one's reasoning.) While there's no reason, it seems to me, to think that Susan must know the probabilistic content, there is, it seems to me, good reason to think she must have some other relevant knowledge around. If, for example, you think that E=K, then the evidential probability must be probability conditional on some knowledge. Let that knowledge be the reason for action. What is it that is the relevant evidence? I don't know, it depends on how the case is filled out. Maybe something a forecaster said? Maybe the look of the clouds? Whatever it is, I say we understand Susan as acting on the basis of that evidence.

Can you get a case like this involving no such evidence? Alex Jackson tries to give us one. But this blog post is getting long and I'm getting hungry, so maybe I'll leave discussion of that for another day. 

Wednesday, February 08, 2012

E = K as foundationalism?

I'm re-reading Timothy Williamson's Knowledge and Its Limits for a reading group at UBC. I'm struck by this passage, from the introduction to Chapter 9 on Evidence.
[W]e may speculate that standard accounts of justification have failed to deal convincingly with the traditional problem of the regress of justifications—what justifies the justifiers?—because they have forbidden themselves to use the concept knowledge. E = K suggests a very modest kind of foundationalism, on which all one's knowledge serves as the foundation for all one's justified beliefs.

I'm not at all sure what to make of this. I'm very impressed by E = K, but I have a hard time seeing reason to accept either of these claims:

  1. E=K is a kind of foundationalism

  2. E=K provides a solution to the traditional problem of the regress


Here's the story about foundationalism and the regress that I tell to my undergrads. I think it's pretty standard; if it's somehow idiosyncratic, I hope someone will tell me. Everybody thinks that the justification for some beliefs depends on other justified beliefs. How do those other beliefs get justified? Maybe by yet further justified beliefs. Foundationalism is the thesis that there are basically justified beliefs -- beliefs that are justified in some other way than by being supported by other justified beliefs. If you're not a foundationalist, then you think that all justified beliefs are justified by other justified beliefs; for any given justified belief, there must be a chain of justified beliefs in successive support relationships that never ends, either because it continues infinitely, or because it doubles back on itself. Insofar as these latter two options are implausible forms of regress, there is intuitive support for foundationalism.

So as I understand it, what it is to be a foundationalist is to think that there are basic beliefs — i.e., beliefs that are justified, not in virtue of being supported by other justified beliefs. I'm surprised to see Williamson suggest that his view is a foundationalist one; E = K appears to me to be neutral on the question of whether there are basic beliefs. The Knowledge First project is consistent with the traditional idea that knowledge entails justified belief; I don't think it's a stretch to say that, on Williamson's view, knowledge is a (special, metaphysically privileged) kind of justified belief.

So if one's knowledge is among one's justified beliefs, then read literally, the claim that "[all of] one's knowledge serves as the foundation for all one's justified beliefs" is tantamount to the claim that the chains of justification of the sort foundationalists talk about are in fact circular: some of my justified beliefs—the knowledgeable ones, at least—are supported by chains that include themselves. But this is anathema to foundationalism, as the label for that view makes vivid.

Maybe I'm reading uncharitably literally; the thesis is that the knowledge is basic, and it supports the mere justified beliefs. All the knowledge is at the bottom of the pyramid and nowhere else. This now looks like foundationalism, but it carries the commitment that all knowledge is basic: all knowledgeable beliefs are justified, not in virtue of being supported by other justified beliefs. This is a stronger claim than any I'd thought Williamson was committed to; I'm not sure it's particularly plausible. There is such a thing as inferential knowledge; in such cases, it seems very intuitive that justification depends on justification of the beliefs from which it's inferred. If you accept a knowledge first program, you shouldn't think that's the main thing or the fundamental thing or the most interesting thing going on -- knowledge first people should be more excited about the fact that the knowledge of the conclusion flows from the knowledge of the premise -- but I see no reason to deny that there's also justificatory dependence at a less fundamental level. But foundationalism is (I thought) precisely about justificatory independence.

So what's going on? Does Williamson intend a weaker sense of 'foundationalism'? Or am I wrong about what the traditional sense would require, given his comments? Or is Williamson really committed to the thesis that if S knows that p, then S's justification for p does not depend on S's justification for any other proposition?