For reasons exactly like the ones outlined in the previous post, these two claims are importantly distinct:
(1) If S knows p, then S can appropriately rely on p in practical reasoning.
(2) If S knows p, then p is warranted enough to justify S in phi-ing, for any phi.
I argued a couple of weeks ago that (1) is not strong enough to establish pragmatic encroachment. I suggested then that this was a problem for Fantl and McGrath; the discussion with Jeremy in the comments thread is part of what helped me to see the distinction between (1) and (2) more clearly. If their argument proceeds from (2), rather than (1), my objection doesn't apply. However, in light of the important distinction between these two claims linking knowledge and action -- I won't speak for anybody else, but this is a distinction I certainly hadn't been thinking clearly about until recently -- we should, if relying on claims like (2), proceed carefully, distinguishing arguments for (2) from arguments for (1).
I gave in my most recent post, linked at the top, an argument against a strictly weaker principle than (2). If that argument is right, then (2) is false. Let phi be an action that, for accidental reasons concerning the background environment, is only justified if S has some super-knowledge access to p. The example from that post: let phi be the assertion that p, and let the background be such that S has promised, in a morally weighty way, not to assert p unless S knows that she knows that she is absolutely certain that p; let p be known, but the higher condition not be met. Then S is not justified in phi-ing, under the circumstances, even though she knows p, and even though, were p better warranted, she would be.
But maybe that argument went a little too quick. For maybe it's question-begging to assume, as I did, that one can know p without being in the super-epistemic position, under the circumstances described. Maybe the act of promising collapses that distinction. If so, then my argument against (2) can be resisted. Indeed, it's sort of the point of (2) that the 'standards' for knowledge rise to as high a level as one might need in any given circumstance.
But -- and here's the main point of this post -- one can retain (1) without collapsing that distinction. That's another respect in which (1) is interestingly different from (2).
I like that promising counterexample. It makes much more plausible a worry Matt and I had about a somewhat odd person who just happens to like reasoning from p only if p has probability 1 -- not because of risk aversion, but because she has a brute preference for propositions with that probability. (Compare a person who just likes reasoning from propositions with probability of exactly .998.) Your example is better. But I don't think it's ultimately a counterexample to (2), and not because you don't know in that case. It's because p is still warranted enough to justify. What stands in the way is not your epistemic weakness, because raising the probability to 1, while holding fixed all other factors relevant to determining whether you're justified (including whether you are in violation of any moral norms), doesn't change whether you're justified. Consider: you might make an (admittedly odd) promise not to reason from any proposition unless it has a probability no higher than .98. Suppose p is certain for you. This principle looks good: if p is certain for you, then p has the degree of certainty required to justify. But it looks like you now have a counterexample to this principle: were p less certain, it would justify. That's odd. And the way to save the principle is to point out that lowering the degree of certainty, while holding everything else relevant fixed, leaves it the case that p fails to justify.
Interesting suggestion, Jeremy.
I guess I tend to get nervous about just how this 'holding everything else fixed' move will work in practice. Why can't we make the same move to attack some of the verdicts you want? I'm standing in front of the frozen lake, with good but not great evidence that p: it can hold my weight. You want to say that p isn't warranted enough to justify me in acting. But consider the counterfactual: raising the probability of p, holding everything else fixed -- including whether I'd be in violation of any prudential norms in crossing the ice -- still won't justify me in crossing.
I certainly see the concern. But I don't see that "It would behoove me to act on p" constitutes an extra reason to phi over and above the reason constituted by p itself. If p is a reason to phi, then you don't get a boost in your reasons to phi by including among your reasons that acting on p is prudent. I don't feel the same about "It will violate a promise to act on p" or "It is immoral to act on p." Those are countervailing reasons relevant to whether p justifies. So they can be held fixed while varying the probability of p.
How would the reasoning go in your case? Presumably something like this: "Well, p is true. So that's a reason to phi. But I promised not to phi if I have only the degree of support that I do. So I'd break the promise if I phi'd. Though p is a good reason to phi, it's defeated by the fact that I'd break a promise if I phi'd." Here the defeat is provided by the fact that I'd break a promise if I phi'd. But that defeat would be provided even if p were certain. Promises beat out p when it comes to reasons to phi. And the way to figure that out is to raise p's probability to 1, hold constant the fact that phi-ing would break a promise, and see what happens.