Robbie Williams, in his "Defending Conditional Excluded Middle," denies this. At least, he argues for a strong disconnect between them. Robbie argues, among other things, that there are strong reasons to accept both (A) and (B):
(A) If I were to flip a fair coin a billion times, it would not land heads a billion times.
(B) If I were to flip a fair coin a billion times, it would not be knowable that it would not land heads a billion times.
Since, Robbie says, (A) and (B) are both true, it can't be that (A) entails the negation of (B) -- therefore Bennett's view, which connects knowledge and counterfactuals in a way that implies that entailment, is false. Robbie's argument for (A) is that rejecting it would require rejecting the truth of too many of our ordinary counterfactuals, which enjoy no stronger metaphysical grounds than (A) does: given that there's a genuine physical probability of really wacky things happening all the time, lots of the counterfactuals we want to maintain have nothing better than this kind of probabilistic connection between antecedent and consequent.
The way Robbie puts the point is that denying (A) would be to commit oneself to an error theory, since it would make our ordinary judgments about ordinary counterfactuals wrong all the time. This move seems to me a bit odd; to my ear, (A) does not look obviously true. Indeed, it looks like we should reject it. That's not to say I can't be moved by an argument in favor of it -- I can -- but if we're in the game of respecting pre-theoretic intuitions, it seems to me that to accept (A) is to embrace something of an error theory, too. We can make it worse if we make the problematic possibility more salient:
(A*) If I were to flip a fair coin a billion times, the possibility of its landing heads a billion times would not be the one to become actual.
If you agree with me that (A*) is equivalent to (A), and that (A*) sounds false, then you must likewise agree that Robbie, in embracing (A), commits to a bit of error theory himself. That's not to say it's therefore a bad view; it's just to say that we're already in the game of weighing various intuitive costs. It's not as simple as: error theories are bad, therefore (A) must be true.
(Another observation: Robbie thinks it'd be bad to deny (A) because it would make us deny the truth of many ordinary counterfactuals, which play important roles in philosophy. He writes:
Error-theories in general should be avoided where possible, I think; but an error-theory concerning counterfactuals would be especially bad. For counterfactuals are one of the main tools of constructive philosophy: we use them in defining up dispositional properties, epistemic states, causation etc. An error-theory of counterfactuals is no isolated cost: it bleeds throughout philosophy.
Perhaps this is right. But if it is true that counterfactuals play really important roles in the construction of philosophical theories, then it's not just their truth that matters -- it's also our knowledge of them. So a view that preserves many of these counterfactuals as true, but that leaves us with very little knowledge about counterfactuals, seems to share a lot of what is problematic about the error theory Robbie discusses.)
Robbie gives three arguments for (B). I'll discuss the first two in this blog post; I think that they have analogues against (A).
The first is the one I've just been emphasizing: (B), Robbie says, is intuitive. I agree; but I think it's also intuitive that (A) is false. Robbie thinks intuitions against (A) should be rejected, on pain of an error theory about counterfactuals -- on pain of accepting the absurdity that almost all ordinary counterfactuals are false. This seems to me very parallel to a standard argument for rejecting intuitions against (B): namely, that doing so will commit one, by parity of reasoning, to an error theory about knowledge -- to accepting the absurdity that almost all ordinary knowledge attributions are false. After all, one might insist, paralleling the discussion above, that my epistemic standing vis-a-vis the coin landing tails at least once is no stronger than my standing with regard to various intuitive pieces of knowledge. This is a point that John Hawthorne makes vividly in Knowledge and Lotteries: the lottery paradox generalizes like crazy.
Robbie's second argument in favor of (B) brings in the lottery-character of (B); (B), he writes, describes "a counterfactual version of a lottery predicament, where consensus has it that agents fail to have the relevant knowledge, supporting the truth of (B)". I agree that this is, prima facie, a reasonably strong case in favor of (B). But I think it has a parallel that is at least as strong against (A). Why is it bad to claim knowledge of lottery propositions? For at least this reason: if I know that my ticket will lose, then, by parity of reasoning, I can know that each other losing ticket will lose. Then, by closure, I can know of wide swaths of tickets that they will all lose; but this is crazy, since when those swaths get wide enough, the probability, conditional on my evidence, that they contain a winner should be high. (In the special case where I know that one ticket will win, it seems, even more absurdly, that this reasoning should allow me to deduce, from known premises, which ticket will win.) Of course, closure for knowledge is controversial; one way out of the puzzle is to deny closure, though of course that has costs of its own.
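Schematically (the formalization is mine, not Robbie's or Hawthorne's): write K(p) for "I know that p" and Li for "ticket i loses". Closure takes us from K(L1), K(L2), ..., K(Lm) to

$$K(L_1 \,\&\, L_2 \,\&\, \cdots \,\&\, L_m),$$

even though the probability of that conjunction on my evidence can be pushed as low as we like by making m large -- and all the way to zero, in the case where I know some ticket wins and m exhausts the tickets.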
I think this argument for (B) generalizes against (A).
Label a series of fair coins c1 ... cn, and let them be spread through physical space such that each is outside the light cone of every other. The distance is there to secure the independence of the results if we were to start flipping. In fact, however, we're not going to start flipping; all the coins remain unflipped. But let's consider some counterfactuals. What would have happened if we'd flipped c1 a billion times? If (A) is true, then we'd better say (A1) is also true:
(A1) If c1 had been flipped a billion times, then c1 would not have landed heads a billion times.
There's nothing special about c1, of course. Metaphysically speaking, c1 is no different from any other ci; so we should say the same of c2, c3, and so on:
(A2) If c2 had been flipped a billion times, then c2 would not have landed heads a billion times.
(A3) If c3 had been flipped a billion times, then c3 would not have landed heads a billion times.
(A4) If c4 had been flipped a billion times, then c4 would not have landed heads a billion times.
...
(An) If cn had been flipped a billion times, then cn would not have landed heads a billion times.
I'm relying here on the metaphysical symmetry of the coins; it's not just that our evidence is alike with respect to each, although of course it is -- it's the stronger claim that there is nothing in virtue of which any coin could have a different counterfactual result when flipped than could any other.
I think we're in trouble: these premises seem to license this conclusion:
(Ac) If c1-cn had been flipped a billion times each, then all of c1-cn would have landed tails at least once.
Notice that I'm relying on a conjunctive principle for counterfactuals here. You might be worried about this, since of course it's not true in general that one can infer (A&B) > (C&D) from A > C and B > D. However, the counterexamples to this entailment -- at least the standard ones -- involve cases where B interferes with A in bringing about C. (If I went to the party, I'd have fun; if I broke my foot, I'd be in pain; but not: if I went to the party and broke my foot, I'd have fun and be in pain. The foot-breaking interferes with my fun-having.) In the special cases considered, in which A and C are totally independent of B and D, I don't see why the inference shouldn't go through.
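Spelled out schematically (the labels are mine): let Fi be "ci is flipped a billion times" and Ti be "ci lands tails at least once". The inference I need is

$$F_1 > T_1,\; F_2 > T_2,\; \ldots,\; F_n > T_n \;\vdash\; (F_1 \,\&\, \cdots \,\&\, F_n) > (T_1 \,\&\, \cdots \,\&\, T_n),$$

and it looks safe precisely when no Fj interferes with any Fi's bringing about Ti -- which is just what the spacelike separation of the coins was built in to guarantee.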
However, (Ac) is implausible for sufficiently large n. Indeed, for some n that I forget how to calculate, the probability that all n coins would land tails at least once drops all the way to .5^1,000,000,000 -- the very probability of the all-heads outcome whose staggering unlikelihood was meant obviously to establish the truth of (A). So if (A) must be counted true on those probabilistic grounds, the relevant instance of (Ac) ought to be counted false for just the same reason. But (Ac) follows from (A) and other plausible principles. So (A) is not true.
(Side note: somebody remind me how to do that calculation? I dimly remember a time when I was pretty good at math, but I'm way out of practice. We have to use logarithms, right?)
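(My back-of-the-envelope attempt, for what it's worth -- and yes, I think it's logarithms. Let p = .5^1,000,000,000 be the probability that a single coin lands heads every single time. The probability that all n coins each land tails at least once is (1-p)^n; setting that equal to p and solving for n gives

$$n = \frac{\ln p}{\ln(1-p)} \approx \frac{\ln(1/p)}{p} = \left(10^9 \ln 2\right) \cdot 2^{10^9},$$

using the approximation ln(1-p) ≈ -p for tiny p. That n is around 10^(3 x 10^8): absurdly large, but finite, and finitude is all the argument needs. Corrections welcome.)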
I submit, therefore, that the unacceptability of (Ac) puts considerable pressure on (A) -- in a way exactly parallel to the way that the unacceptability of claiming knowledge that large swaths of lottery tickets contain no winners puts pressure in favor of (B). Maybe my conjunctive principle for counterfactuals is controversial; but so is closure for knowledge. (Maybe we don't even need the conjunctive principle; maybe we should find the conjunction of (A1) ... (An) bad enough on its own.) So I don't see anything in Robbie's second argument that motivates treating the knowledge case differently from the counterfactual one.
I'll set aside, for the time being, Robbie's third argument in favor of (B). It's that (B) follows from a sensitivity condition on knowledge. Now the obvious thing to say about that is that pretty much nobody nowadays defends a sensitivity condition on knowledge. But I'm not going to say that, because I'm an exception to the generalization; I do accept a sensitivity condition on knowledge. So I'll have to say something a bit subtler. I'll leave that for another time; I've surely said more than enough for a blog post already.