Saturday, October 24, 2009

Counterfactuals and Knowledge

I'm on the record as thinking there are tight connections between counterfactuals and knowledge.

Robbie Williams, in his "Defending Conditional Excluded Middle," denies this. At least, he argues for a strong disconnect between them. Robbie argues, among other things, that there are strong reasons to accept both (A) and (B):
(A) If I were to flip a fair coin a billion times, it would not land heads a billion times.

(B) If I were to flip a fair coin a billion times, it would not be knowable that it would not land heads a billion times.

Since, Robbie says, (A) and (B) are both true, it can't be that (A) entails the negation of (B); therefore Bennett's view, which connects knowledge and counterfactuals in a way that implies that entailment, is false. Robbie's argument for (A) is that rejecting it would require rejecting the truth of too many of our ordinary counterfactuals, which enjoy no stronger metaphysical grounds than (A) does: since there's a genuine physical probability of really wacky things happening all the time, lots of counterfactuals we want to maintain rest on nothing better than this kind of probabilistic connection between antecedent and consequent.

The way Robbie puts the point is that denying (A) would be to commit oneself to an error theory, since it would make our ordinary judgments about ordinary counterfactuals wrong all the time. This move seems to me a bit odd; to my ear, (A) does not look obviously true. Indeed, it looks like we should reject it. That's not to say I can't be moved by an argument in favor of it -- I can -- but if we're in the game of respecting pre-theoretic intuitions, it seems to me that to accept (A) is to embrace something of an error theory, too. We can make it worse if we make the problematic possibility more salient:
(A*) If I were to flip a fair coin a billion times, the possibility of its landing heads a billion times would not be the one to become actual.

If you agree with me that (A*) is equivalent to (A), and that (A*) sounds false, then you must likewise agree that Robbie, in embracing (A), commits to a bit of error theory himself. That's not to say his is therefore a bad view; it's just to say that we're already in the game of weighing various intuitive costs. It's not as simple as: error theories are bad, therefore (A) must be true.

(Another observation: Robbie thinks it'd be bad to deny (A) because it would make us deny the truth of many ordinary counterfactuals, which play important roles in philosophy. He writes:
Error-theories in general should be avoided where possible, I think; but an error-theory concerning counterfactuals would be especially bad. For counterfactuals are one of the main tools of constructive philosophy: we use them in defining up dispositional properties, epistemic states, causation etc. An error-theory of counterfactuals is no isolated cost: it bleeds throughout philosophy.

Perhaps this is right. But if counterfactuals really do play important roles in the construction of philosophical theories, then it's not just their truth that matters -- it's also our knowledge of them. So a view that preserves many of these counterfactuals as true, but that leaves us with very little knowledge of counterfactuals, seems to share much of what is problematic about the error theory Robbie discusses.)

Robbie gives three arguments for (B). I'll discuss the first two in this blog post; I think that they have analogues against (A).

The first is the one I've just been emphasizing: (B), Robbie says, is intuitive. I agree; but I think it's also intuitive that (A) is false. Robbie thinks intuitions against (A) should be rejected, on pain of an error theory about counterfactuals -- on pain of accepting the absurdity that almost all ordinary counterfactuals are false. This seems to me very parallel to a standard argument for rejecting intuitions in favor of (B): namely, that accepting them commits one, by parity of reasoning, to an error theory about knowledge -- to accepting the absurdity that almost all ordinary knowledge attributions are false. After all, one might insist, paralleling the discussion above, that my epistemic standing vis-a-vis the coin's landing tails at least once is no stronger than my standing with regard to various intuitive pieces of knowledge. This is a point John Hawthorne makes vividly in Knowledge and Lotteries: the lottery paradox generalizes like crazy.

Robbie's second argument in favor of (B) brings in its lottery character; (B), he writes, describes "a counterfactual version of a lottery predicament, where consensus has it that agents fail to have the relevant knowledge, supporting the truth of (B)". I agree that this is, prima facie, a reasonably strong case in favor of (B). But I think it has a parallel that is at least as strong against (A). Why is it bad to claim knowledge of lottery propositions? For at least this reason: if I know that my ticket will lose, then, by parity of reasoning, I can know of each other losing ticket that it will lose. Then, by closure, I can know of wide swaths of tickets that they will all lose; but this is crazy, since when those swaths get wide enough, the probability of their containing a winner, conditional on my evidence, should be high. (In the special case where I know that one ticket will win, it seems, even more absurdly, that this reasoning should allow me to deduce, from known premises, that the one remaining ticket will win.) Of course, closure of knowledge is controversial; one way out of the puzzle is to deny closure, though of course that has costs of its own.

I think this argument for (B) generalizes against (A).

Label a series of fair coins c1 ... cn, and let them be spread through physical space such that each is outside the light cone of every other. The distance is to secure independence of results if we were to start flipping. In fact, however, we're not going to start flipping; in actuality, all the coins remain unflipped. But let's consider some counterfactuals. What would have happened if we'd flipped c1 a billion times? If (A) is true, then we'd better say (A1) is also true:
(A1) If c1 had been flipped a billion times, then c1 would not have landed heads a billion times.

There's nothing special about c1, of course. Metaphysically speaking c1 is no different from any other ci; so we should say the same of c2, c3, and so on.
(A2) If c2 had been flipped a billion times, then c2 would not have landed heads a billion times.

(A3) If c3 had been flipped a billion times, then c3 would not have landed heads a billion times.

(A4) If c4 had been flipped a billion times, then c4 would not have landed heads a billion times.

...

(An) If cn had been flipped a billion times, then cn would not have landed heads a billion times.

I'm relying here on the metaphysical symmetry of the coins; it's not just that our evidence is alike with respect to each, although of course it is -- it's the stronger claim that there is nothing in virtue of which any coin could have a different counterfactual result, if flipped, than any other.

I think we're in trouble. These premises, I think, license this conclusion:
(Ac) If c1-cn had been flipped a billion times each, then all of c1-cn would have landed tails at least once.

Notice that I'm relying on a conjunctive principle for counterfactuals here. You might be worried about this, since of course it's not true in general that one can infer (A&B) > (C&D) from A > C and B > D. However, the counterexamples to this entailment -- at least the standard ones -- involve cases where B interferes with A in bringing about C. (If I went to the party, I'd have fun; if I broke my foot, I'd be in pain; but not: if I went to the party and broke my foot, I'd have fun and be in pain. The foot-breaking interferes with my fun-having.) In the special cases under consideration, in which A and C are totally independent of B and D, I don't see why the inference shouldn't go through.

However, (Ac) is implausible for sufficiently large n. Indeed, for some n that I forget how to calculate, the probability of <if I flipped n coins a billion times each, each would land tails at least once> is .5^1,000,000,000 -- the very same tiny probability whose smallness was meant to make (A) obviously true. So by whatever standard we must count (A) true, the relevant instance of (Ac) ought to be counted false. But (Ac) follows from (A) and other plausible principles. So (A) is not true.

(Side note: somebody remind me how to do that calculation? I dimly remember a time when I was pretty good at math, but I'm way out of practice. We have to use logarithms, right?)

I submit, therefore, that the unacceptability of (Ac) puts considerable pressure on (A) -- in a way exactly parallel to the way that the unacceptability of claiming knowledge that large swaths of lottery tickets contain no winners puts pressure in favor of (B). Maybe my conjunctive principle for counterfactuals will prove controversial; but so is closure for knowledge. (Maybe we don't even need the conjunctive principle; maybe we should find the conjunction of (A1) ... (An) bad enough.) So I don't see anything in Robbie's second argument that motivates treating the knowledge case differently from the counterfactual one.

I'll set aside, for the time being, Robbie's third argument in favor of (B). It's that (B) follows from a sensitivity condition on knowledge. Now the obvious thing to say about that is that pretty much nobody nowadays defends a sensitivity condition on knowledge. But I'm not going to say that, because I'm an exception to the generalization; I do accept a sensitivity condition on knowledge. So I'll have to say something a bit subtler. I'll leave that for another time; I've surely said more than enough for a blog post already.

11 comments:

  1. Hi Jonathan,

    Interesting stuff! I'll have a think about the later discussion, but I wanted to say a couple of things about the initial tension as you describe it.

    Here was my basic thought. Error theories about ordinary counterfactuals, a la Hajek, are awful. They say that most of our beliefs about ordinary counterfactuals are in error. This has prima facie practical and theoretical costs. We rely on ordinary counterfactual beliefs all the time (e.g. in practical reasoning)--- and, theoretically, appeal to (the truth of) ordinary counterfactuals is really widespread. So both as practical agents and as theorists, we are committed to their truth, and not in just a peripheral way. A theory that says all of this is false is prima facie pretty costly.

    As you note, my case for (A) being true piggybacks on this: it's that ordinary counterfactuals are true, and the best extant theories of what makes them true make (A) true too.

    Now, you say: "to my ear, (A) does not look obviously true. Indeed, it looks like we should reject it. That’s not to say I can’t be moved by an argument in favor of it — I can — but if we’re in the game of respecting pre-theoretic intuitions, it seems to me that to accept (A) is to embrace something of an error theory."

    I don't think I was terribly clear on this in the paper, but I was thinking of an error-theorist as someone who takes a whole class of propositions---e.g. ordinary arithmetic as a whole; ordinary talk about macro-objects as a whole; ordinary talk about ordinary counterfactuals as a whole---and says that truth-values are systematically uncorrelated with our beliefs in the area. From that perspective, we're not really in the territory of error-theory when we're fighting about individual statements like (A) and (B). Of course, if we say something surprising (like denying something that most people think is true) then we owe some explanation; that's obvious. But I think error-theories proper are a different kind of beast altogether.

    Maybe we could say that you're thinking about "local" error theories here, and I'm thinking about more "global" ones---and my case for (A) was that it followed from the theory that is our best chance of avoiding a global error theory.

    You mention the "game of respecting pre-theoretic intuitions". You've thought about this stuff much more than me! How much were you meaning to build into this? Part of my case for the badness of an error-theory centred on the theoretical utility of ordinary counterfactuals; part of it just on the fact that people do tend to believe them, and not just on a whim (cf. stuff about counterfactual explanations of why someone behaved as they did). I don't think we can point to that sort of cluster of "entrenchment" of beliefs for high-falutin' counterfactuals like the (A) or (~A) we're discussing in this setting---so I'm not sure I was in the game of matching pre-theoretic intuitions.

    But in any case, if we ask about pre-theoretic intuitions about (A) itself, I'm simply not sure about (A)'s status. I can't believe that people seriously have as strong and stable opinions about it as about the truth of ordinary counterfactuals. I certainly don't! For me it's "spoils to the victor" on this particular case. So my view really is that local considerations for or against (A) don't cut much ice (it's the global stuff that moves me).

    Anyway, so this issue is all very much scene-setting for your post rather than the heart of it, but I thought it was interesting....

  2. Thanks for the comments, Robbie. With regard to the difference between accepting an error theory and rejecting a bunch of intuitions, I'm happy to take on board the distinction you draw in this comment, in which case the tu quoque I offered with regard to error theory is not obviously correct, at least in the terms stated. However, I still have two worries -- one a modified version of the original worry, and one a new worry that arises given that distinction. The modification of the original worry is this: even if your view doesn't end up counting as an error theory, it will, I worry, share many of the bad-making features of a Hajek-style error theory. You say:
    "Error theories about ordinary counterfactuals, a la Hajek, are awful. They say that our most of our beliefs about ordinary counterfactuals are in error. This has prima facie practical and theoretical costs. We rely on ordinary counterfactual beliefs all the time (e.g. in practical reasoning)— and, theoretically, appeal to (the truth of) ordinary counterfactuals is really widespread."
    If -- and I realize now this isn't something you explicitly discussed, so maybe I'm wrong in thinking you're committed to this? -- we don't know many ordinary counterfactuals, this seems to me almost as bad as thinking they're false. For beliefs that fail to be knowledge are erroneous, too, and relying on non-knowledgeable beliefs in practical and theoretical reasoning is a mistake. (This is a bit controversial too, I guess, but there are a good number of us who think it's right.)
    The second worry concerns whether, ultimately, it's best to understand the relevant theorizing as relying on the truth of counterfactuals. Since the error theory is now identified in terms of the actual commitments of our developed theory, I suppose there's some room to question whether the truth of the relevant counterfactuals will ultimately play the role you think it does. (If an error theory were just a matter of rejecting things that we thought were true, it'd be obvious that making ordinary counterfactuals false would constitute an error theory. If being an error theory requires rejecting things that we genuinely need, it doesn't seem as obvious.)

  3. Re the side note:
    Let p be .5^billion, a small probability. For each coin ci,
    Prob(ci is tails at least once) = 1 – p
    Prob(each of c1 to cn is tails at least once) = (1 – p)^n
    Set this equal to p, and solve for n.
    (1 – p)^n = p
    log (1 – p)^n = log p
    n log (1 – p) = log p
    n = log p/log (1 – p)
    This number is a little tricky to calculate numerically, since log(1–p) is so close to zero. We can use an approximation (taking logs to be natural): for small p,
    log(1 – p) ≈ –p
    log p = log (.5^billion) = –(log 2)*billion
    So n ≈ log 2 * billion * 2^billion.
    ...which is pretty big.
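
    A quick numerical check of the formula above -- a minimal Python sketch, scaled down to 20 flips per coin, since .5^billion underflows ordinary floating point (the function name and the choice of 20 flips are just illustrative assumptions):

    import math

    def crossover_n(flips):
        # n such that Prob(each of n coins shows tails at least once,
        # given `flips` flips each) falls all the way down to p = 0.5**flips
        p = 0.5 ** flips                     # prob. that one coin lands heads every time
        return math.log(p) / math.log1p(-p)  # n = log p / log(1 - p)

    flips = 20                          # stand-in for a billion, keeps floats finite
    p = 0.5 ** flips
    n = crossover_n(flips)
    print(n)                            # exact n: about 1.45e7 coins
    print(math.log(2) * flips / p)      # approximation: log 2 * flips * 2**flips
    print((1 - p) ** n, p)              # both ~9.5e-07: (1 - p)^n = p at this n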

  4. On your discussion of (B). I guess I've implicitly already said something about the first case. Basically, I agree with the parallel to lottery cases---they *do* seem to generalize like mad, prima facie. If there's no decent solution to the lottery paradox in the simple case, we'll have to choose between generalized scepticism and knowledge of lottery propositions. And if you choose the "knowledge of lottery propositions" horn, you should then deny (B).

    One interesting thought you raise is the suggestion that the way I defend (A) ("parity of structure") makes me particularly vulnerable to lottery arguments. I guess this very much turns on the details. My favoured account of the truth-conditions of counterfactuals makes the structure relevantly the same (as does Lewis's). But the reasons I have for making the structure relevant to counterfactual truth-conditions don't really seem to me to put pressure on us to regard the structure as epistemically relevant---so I'm not quite seeing how the transfer would go. But it's an intriguing idea.

    On the second case---I'll see if I can come up with something to say about this. Right now, I'll just note that on my favoured view (from the PPR paper), the counterfactual principle A>B, C>D |= AC>BD will fail, even under the independence conditions you mention. So that's the point I'd pick to resist your argument (whether that's defensible is another matter).

    The analogue to closure that I do like is counterfactual agglomeration: from a shared counterfactual antecedent, and different consequents, to that antecedent with the conjoined consequent. Your principle appeals to a kind of "agglomeration in the antecedent" in addition to the "agglomeration in the consequent" that's a standard part of counterfactual logic. I think that's tricky. It feels like you want to appeal to a certain kind of strengthening of the antecedent of a counterfactual: specifically, if A>C is true and X is "independent" of A, then A&X>C is true (that, at least, will be sufficient to derive your principle, given standard agglomeration). I'll admit that sounds pretty plausible at first glance! But lots of initially plausible-seeming inference-principles have very nasty consequences when working with conditionals, so I'm a bit wary of this kind of data.

    Incidentally, what do you think of the straight case for Hajekian error-theory based on this principle? (Here's an adaptation of a scenario from a recent Lasonen-Aarnio/Hawthorne paper.) I'm in a room, and do not drop my mug (mug1). If we're not error-theorists, "drop1>fall1" is true. It turns out there are billions of people like me, and by parity, "drop(n)>fall(n)" is true for all n. But conjoining via your principle: if we'd all dropped our mugs, they'd all have fallen. But we can choose the numbers in such a way that even though the chance of ~fall for any one of us is super-low, the chance of one of the mugs not falling, conditional on all being dropped, is super-high, so the conjoined counterfactual seems implausible. (I've set this up on the supposition that there are actually lots of mug-wielding duplicates of me; but it seems to me that this is eliminable.)

    I think if a principle of counterfactual reasoning threatens error-theory like this, and can be dropped without mutilating the logic of counterfactuals (as I think is the case for the one under discussion), we've got good reason to reject it... WDYT?

  5. Thanks for the detailed comments, Robbie. Here are a few more scattered thoughts.
    I guess I'm pretty pessimistic about finding the needed disanalogy that gives us drop > K(fall) but not billion flips > K(at least one tails). You've stipulated that in the latter case the hypothetical agent has 'purely probabilistic' evidence, and conjectured that in the former there's something else; but I don't see why the relevant something else shouldn't apply in the coinflip case, too. I agree that this is pretty much just the lottery paradox that we're engaging with; it sounds like you're holding out for a solution with some features that I think would be pretty difficult to come by. Do you have an idea of what you want to do with the lottery paradox? My methodological preference is to tackle this counterfactual puzzle and the relevant knowledge puzzles together; the puzzles are so similar, and so closely related, that it'd be surprising to find their ultimate solutions disconnected. I give a contextualist treatment of both in the paper linked in my post. The idea is inspired by Lewis's approach to 'knows': "drop1>fall1" is true in contexts where we're ignoring the wacky possibilities; "billion flips > at least one tails" is not true in its natural context, because the sentence raises to salience the probabilistic nature of the setup, thus supplying a context in which the relevant deviant possibility is salient. This generalizes to the lottery paradox in the obvious way.
    My response to the error-theory argument in your comment 5 is just the same as my response to the preface paradox. Suppose I'm in an ordinary context; I can say "drop1>fall1" and express a truth. But if I said "drop all>fall all", that'd express a falsehood, because it'd raise to salience the very likely possibility that at least one of them would go wacky. And in this 'skeptical' context, "drop1>fall1" is false too.
    I totally agree with the second half of your comment three. Maintaining reasonably high credence short of knowledge is better than nothing, but it invites careful consideration of Williamsonian knowledge questions. And yes, the kind of explication move you suggest sounds like a reasonable one at that stage, should that be where the dialectic goes.
    I'm afraid I'm not quite tracking the discussion in the second paragraph of comment 5. Which part of my discussion are you referencing there?

  6. Robbie: another thought. Suppose we deny the agglomeration principle I wanted. Isn't the conjunction of all the coinflip counterfactuals bad enough? You want every coinflip counterfactual to be true. Then you're committed to the truth of the conjunction (C): (flip1>tail1) & (flip2>tail2) & ... & (flipn>tailn).

    Now what attitude ought we to take toward (C)? Intuitively, it seems that, for a sufficiently large n, we should be nearly certain that (C) is false. But your view has it that it's true. If your view implies something we should be nearly certain is false, isn't that a reason to doubt the view?

  7. @Jonathan8: Yeah, I don't mind that conjunction! (Maybe that makes me a bad person.) More constructively: the least bad response I can recommend to those who find the conjunction counterintuitive is to give up the intuition in order to gain the benefits of the theory I favour (which is much the same as my attitude to the inference pattern you want to be valid).

    @Jonathan6. Let me think this through. First, on the lottery case, you like a certain kind of contextualist quantifier treatment. Now if we apply that to the relevant cases, we'll get that the ideal agent can't know that the result of the coin flips won't be all-heads (in the relevant context), but that the ideal agent can know that the mug will fall (in the relevant context). (Feel free to insert quotes or whatever to avoid any use/mention problems in that statement.) Marry that with the treatment of the counterfactual that I like, and you get the result I favour, right? So if I believed you on the right way to think about lotteries, but stuck to my guns on conditionals, the situation would be very much like the one I describe... wouldn't it?

    Of course there's the issue of why we don't give a uniform solution to the two puzzles. I should read your paper on that! But I guess I don't really see any prima facie tension in giving different solutions (it's a bit like what I feel about people like Field who want a uniform solution to the Liar and the Sorites: it's an interesting idea, and worth working out, but the two phenomena are intuitively pretty different, and it wouldn't surprise me, or feel unprincipled, if the ultimate accounts were very different).

    There's a different kind of uniformity characteristic of my favoured account of counterfactuals: if you believe the Lewis analysis from "Time's Arrow" is roughly right for the non-chancy case, mine is (roughly) what you get if you reinterpret what it is for a world to "fit" the laws, in the way we anyway need to in order to get a Humean best-system account of chancy laws (this is the Elga stuff). So to me it seems utterly unsurprising that we end up with the sort of closeness conditions for chancy counterfactuals I talk about. And that already solves (at least some of) Hajek's worries, without the need to appeal to context dependence.

    One question that is interesting is whether your theory sustains close enough connections between the relevant class of worlds for counterfactuals and epistemic modals to make the connection I'm discussing in the paper plausible. Would you sign up for (or find attractive) the Bennettian principles I discuss?

  8. Hi Robbie, sorry for the slow response. Here are a few more thoughts.

    In this comment, I'll just develop the suggestion that if you deny my agglomeration principle, the conjunction is bad enough. That's not just because the conjunction is counterintuitive; I think there are theoretical pressures against accepting it.

    You think that it's false that flip1>K(tail1) -- false that if we were to flip coin 1 a billion times, one could know that coin 1 would land tails at least once. I think it's very hard to hold that without also thinking it's false that K(flip1>tail1). Argument: suppose I know the counterfactual. Then suppose I come to learn that the antecedent is true. This, I'd think, shouldn't prevent me from continuing to know the counterfactual. Then, by modus ponens, I could come to know tail1. I'm not sure if you'll agree with me this far. But I think that if flip1>K(tail1) is false, then so must K(flip1>tail1) be.

    So we don't know flip1>tail1. But what should our attitude toward flip1>tail1 be? If you're a CEM guy, you should presumably think it's very likely, though not certain. (If you're Lewis, you might just think it's false. Or you might play around with quasi-miracles.) And of course, we should take the same attitude towards flip2>tail2, etc.

    What about the conjunction? Your view is that it is true. That's just a view about metaphysics, not about what attitudes anybody ought to have; but of course, in holding a view, you commit yourself to a certain attitude's being appropriate toward it. It's not a good situation to have the view that p and also the view that the rational credence in p is very low. But I worry that this is the position you may now be committed to. For, plausibly, the rational credence in the conjunction (flip1>tail1 & flip2>tail2 & ... & flipn>tailn) is arbitrarily low for sufficiently large n. The only extra assumption we need to get this result from what we had already is that the counterfactuals are probabilistically independent of one another. It's hard to think clearly about these questions, but I find this intuitively very plausible -- what this coin would do if flipped does not depend probabilistically on what that other coin would do if flipped.

    (Suppose we tried to deny the needed independence; the only plausible way I can see to do that is to say that we're certain that each counterfactual has the same truth value. This might be plausible in the case where we're certain that none of the coins are going to be flipped -- everything is metaphysically symmetric in those cases. But it is not at all plausible if we're uncertain about whether there will be flippings. If we think some of the coins might be flipped, then we must think there's a nonzero chance that some coins, but not all, will land heads a billion times.)
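
    To put toy numbers on the "arbitrarily low" claim, here's a minimal Python sketch; the figures are merely illustrative assumptions (I again scale down to 20 flips per coin so the quantities are representable as floats). If one's credence in each conjunct is q = 1 - p, with p = .5^flips, then independence puts the credence in the n-fold conjunction at q^n, which falls as fast as you like as n grows:

    # Toy illustration (assumed numbers): credence in each conjunct
    # flip_i > tail_i is q = 1 - p, where p = 0.5**flips; independence
    # puts the credence in the n-fold conjunction at q**n.
    flips = 20                  # stand-in for a billion flips per coin
    p = 0.5 ** flips
    q = 1 - p                   # credence in any single conjunct: ~0.999999
    for n in (10**6, 10**7, 10**8):
        print(n, q ** n)        # ~0.39, then ~7e-5, then ~4e-42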

    More thoughts -- and responses to your questions -- to follow soon.

  9. Robbie, in response to your other questions (and sorry again for the delay!): Yes, my view about lotteries and your view about counterfactuals seem entirely consistent with one another. I am inclined, though, to think they're not a very good fit -- knowledge and counterfactuals seem more closely connected than that. One of the motivations of my project is to make sense of traditional connections between them, e.g., sensitivity. I also find compelling the independent data in favor of context-sensitive counterfactuals.

    Good question, though, whether I want to sign up for Bennett's Hypothesis. I'm not sure, to be honest. I haven't been thinking in those terms. Certainly, I don't see any clear counterexamples. I sort of suspect I might be committed to it by the stories of knowledge and counterfactuals in my PPR paper, although I couldn't quite prove it in the two minutes I just spent with my whiteboard. So call me attracted but uncertain, I guess.

  10. Hi Jonathan,

    My turn to apologise for a very slow reply! Just on the first part of 10. There's something similar to the modus ponens argument in Dylan Dodd's Synthese paper objecting to the general Lewisian ideas the PPR paper works within. Here's how I've been thinking of it.

    Consider defenders of strong centering (which includes me, because of CEM+weak centering, and includes Lewis, weirdly---it fits *very* badly with his views on "might" counterfactuals). For these guys, once you're certain of the antecedent of a counterfactual, your credence in the counterfactual is fixed by your credence in the consequent. Indeed, you'll believe the counterfactual exactly to the extent you believe the consequent.

    So much is neutral territory. Where views diverge, for strong centering folks, is over the truth values of counterfactuals in non-antecedent worlds. This is where disagreements over the shape of the closeness ordering kick in. In the limit, when I'm certain the antecedent is false, I'm able to be certain that "if a fair coin were flipped a billion times, it wouldn't land all-heads" is true. When I give some credence to the antecedent, the nasty antecedent-but-not-consequent possibilities detract from my credence somewhat.

    In short, in situations where I don't know the negation of the antecedent, the lottery-like character of the setup comes in *anyway*---because in (epistemically possible) antecedent worlds the truth value of the counterfactual turns on the actual (chancy) outcome. So something like this tempts me: when we *know* the antecedent is false, it's feasible to *know* that the counterfactual is true. When we come to believe the antecedent, clearly we don't know its negation, and so the counterfactual's truth at epistemic possibilities turns lottery-like. Whether we can know it becomes lottery-problematic.

    On the other hand, for Bennett's hypothetical knower, the situation is always lottery-like, since ex hypothesi they're in a world where the antecedent obtains, and the question is whether they know the outcome.

    I'm inclined to deny, therefore (at least for the highly probabilistic counterfactuals), that knowledge of these particular counterfactuals survives learning that the antecedent is true. Of course, this is more a matter of drawing out the consequences of my theory than of motivating them! But I'd be interested to figure out whether you think that's enough, or whether I owe something more illuminating...
