First, some background -- both to clue in any readers who are interested but haven't read the Dretske, and so that I can make sure I have his framework clear in my own head.
Dretske's central idea is to use information theory to elucidate knowledge. The first few chapters of his book are devoted to articulating the relevant ideas about information. At least for the purpose of argument, I want to take all of that on board for now. Here are some central ideas:
Suppose we've adopted a convention whose purpose is to tell me whether John is happy. You'll send me a text message comprising either 'YES' or 'NO'; it is understood that you will definitely send me YES if John is happy, and NO if he isn't, and you (or anyone else) won't send me any message other than one of those two. In fact, John is happy, so you send me a message, and the word 'YES' appears on my screen.
In Dretske's terminology, my screen carries the information that John is happy, and it does so by virtue of having the word 'YES' on it. The idea is that only if John were happy would my screen say 'YES'; there's no other way for this to be so. There is, under the appropriate background assumptions, no possible way for my screen to be the way it is, without John being happy, so my screen carries the information that John is happy.
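For anyone who wants the official version: Dretske's definition of a signal's informational content, if I'm remembering the book right, can be put roughly as follows, with $r$ for the signal and $k$ for the receiver's background knowledge (the notation here is my gloss, not a quotation):

```latex
\[
r \text{ carries the information that } s \text{ is } F
\;\iff\;
P(s \text{ is } F \mid r, k) = 1
\ \text{ and }\
P(s \text{ is } F \mid k) < 1.
\]
```

The second clause just rules out trivial cases: if I already knew John was happy, the text message wouldn't be carrying me any new information.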
Dretske characterizes (at least a certain kind of) knowledge in terms of information:
K knows that s is F = K's belief that s is F is caused (or causally sustained) by the information that s is F.
He clarifies what it is for information to cause a belief:
When ... a signal carries the information that s is F in virtue of having property F', when it is the signal's being F' that carries the information, then (and only then) will we say that the information that s is F causes whatever the signal's being F' causes.
So for me to know that John is happy, I need my belief to be caused (or sustained) by my screen's saying 'YES', and for my screen's saying 'YES' to carry the information that John is happy.
This is prima facie a very strong condition on knowledge. For my belief to be knowledge, it must be based on a signal that nomologically guarantees the content of my belief. If, for instance, there is any possibility that you will confuse the signals, texting 'YES' to try to tell me that John is not happy, or that you will lie to me, or that you yourself will misapprehend John's emotional state, or that the phone company will send me a 'YES' when you sent a 'NO', then my screen does not carry the information that John is happy, so I can't know that he's happy.
It's easy to see that this kind of picture runs the risk of serious skeptical consequences. Lots and lots of our putative knowledge looks like it's caused by signals that do not carry the information in question, in Dretske's sense. Take testimony, for instance -- suppose that you believe that Dretske's book was published in 1981 because I said it was, and I said it was because I believed it. There's no perfect relationship between my believing it was published in 1981 and its being published in 1981; there are, in some intuitive sense, possibilities where I have a wrong belief and pass it on via my blog post. So my believing that it was published in 1981 doesn't carry the information that it was published in 1981. (This is so even if my belief amounts to knowledge, because my belief may in fact have been caused by a signal that carries the information, even though there are other possible ways it could have been caused.) So you can't get knowledge via my testimony.
Dretske's own strategy for heading off these skeptical consequences, as I understand it, is to develop his 'relevant alternatives' approach to knowledge and information, according to which it is in some sense not a (genuine) possibility that I be wrong, if the circumstances are in fact such that I was right. I haven't read that part of the book yet, so I'll hold off on discussing that material. The idea I wanted to consider in this blog post -- wow, that was a lot of exposition! -- is a different strategy.
My believing that p doesn't carry the information that p. But I have more properties than believing that p; might some of them carry the information that p, and causally produce or sustain your belief? Suppose, for instance, that I know that p. If I know that p, and tell you that p, and you believe me, we might say this: your belief was caused by my knowing that p. And my knowing that p (unlike my merely believing that p) carries the information that p. This is so, by Dretske's lights, even absent any moves about relevant alternatives -- there's no possible case at all, not even an 'irrelevant' one, where I know that p but p is false. Knowledge looks like an excellent signal for the transmission of information.
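The reason knowledge works here where mere belief doesn't is just the factivity of knowledge (the T axiom of epistemic logic). Put in the probabilistic idiom of Dretske's information condition (again my gloss, not his formulation):

```latex
\[
Kp \rightarrow p,
\quad\text{hence}\quad
P(p \mid S \text{ knows that } p) = 1,
\]
```

whereas $P(p \mid S \text{ believes that } p)$ can fall short of $1$, which is exactly why belief alone fails to carry the information.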
When I mentioned this idea in the reading group, it was met with a fair amount of resistance, but no one was able to give a very clear statement of what was wrong with it. One potential worry concerned whether my knowing could plausibly be the relevantly causally efficacious state; wouldn't my merely believing have had the same effect on you? Maybe so, but that doesn't mean it wasn't my knowledge that had the effect in actuality. Dretske endorses a kind of counterfactual approach to causation, and I think my knowledge passes the test in this case: if I hadn't known that p, I wouldn't have asserted that p, and so you wouldn't have believed that p. So your belief is caused by my knowledge.
Am I missing an obvious problem with this strategy? It looks like it'd be pretty helpful for someone attracted to Dretske's approach.