Monday, January 30, 2017

Labeling lies and knowing minds

I don't have super strong feelings about whether news outlets should use the word 'lies' to describe Donald Trump's lies. As long as they're super clear about how he's saying that p even though p is false, that seems to me to be the important thing. The controversy over whether to use the 'L-word' doesn't really interest me all that much.

That said, I did find it pretty interesting to read NPR's description of why they don't call Trump's lies 'lies'. The basic thought is this: in order for something to be a lie, it has to be said with an intent to deceive. So in calling something a lie, one is in part making a claim about the intentions behind it. As NPR's Mary Louise Kelly puts it: "without the ability to peer into Donald Trump's head, I can't tell you what his intent was. I can tell you what he said and how that squares, or doesn't, with facts."

This is an epistemological claim—a skeptical one. It's often tempting to say that you can never really know what someone is thinking, because all you really have to go on is how they behave. But skeptical temptations are funny things, and there's probably good reason to resist a lot of them a lot of the time. For example, notice that it's also tempting to say that you can never really know anything about the future, since it hasn't happened yet, or that you can never really know historical facts, since you weren't personally there. At an extreme, Descartes famously argued that you can never really know anything about the external world, since you might be the victim of an evil demon who is manipulating your senses in a way that doesn't correspond to reality.

These skeptical arguments do carry some intuitive force, but most of us—us epistemologists, and us people who do things in the world—are committed to their being wrong. You and I know lots of things about the external world. I know that my dog just left this room, for instance. We know many things we didn't see for ourselves, but instead rely on others to inform us about. For example, I know that Donald Trump fired Sally Yates today, even though I wasn't there. (I read about it via news websites.) We know many things about the future. For example, I know that I will give a lecture on rationalism in the morning. Do you know how you'll get to work tomorrow, or when you'll next see your best friend? I am confident that many readers do.

We also know many things about others' minds. I know that my dog noticed that squirrel—this is manifest from her behaviour. (If you ask me, "did she notice the squirrel?" I will say "yes"; I won't say "there's no way to tell without seeing into her soul".) I know, of some of my friends, that they are terrified by the Trump administration. I know about some people's romantic feelings towards other people. When I watch someone at a sports bar, I often know which team they want to win. When I watch someone struggling with their arms full of groceries fumbling around with their keys, I know what they're trying to do.

There are certainly interesting questions about how we're able to tell what people are thinking and feeling and trying to do, but there's nothing inherently mysterious or spooky about the idea. ("Mind-reading" is an active and lively area of study in psychology and philosophy of mind.) One of the traits of autism is a kind of difficulty in knowing others' minds—conversely, the ability to know others' minds is neurotypical. To use Kelly's term, we really do, in an important sense, have "the ability to peer into someone's head".

To be sure, I can't always tell what someone is thinking. Sometimes they're not giving the kind of outward signs it would require for me to tell. Sometimes I even go wrong, misattributing a mental state to someone. A con artist might deceive me about what they're trying to do, for instance. But this kind of possibility of ignorance or error does not mean we cannot often have knowledge of people's thoughts and feelings. (After all, it's possible to go wrong with our perception, too.)

So I don't think we should take our reluctance to ascribe knowledge to people's inner lives very seriously. If NPR doesn't want to say Trump is lying because it would be unhelpfully inflammatory, I have no problem with that decision. But the line that in general you can't know what's in someone's head is just bad epistemology.

It's also inconsistently applied. I took a look through a number of recent NPR stories, to find examples of reported claims that imply something about someone's inner life. It turns out, there are lots of examples where NPR seems willing to make claims that would require "peering into someone's head". Here are a few:

  • Frauke Petry's "political allies are worried enough to have taken stances against migrants and the European Union that sound a lot like AfD's positions." Worry is a feeling. Is NPR able to peer into the heads of those allies?
  • "In response to the order, in Chicago, all remaining detainees were freed after being detained by Customs and Border Protection agents at Chicago O'Hare International Airport Saturday." Here NPR is making assertions about why some people did some things. This depends on their thoughts. What makes NPR so sure that they didn't ignore the order and just coincidentally happen to free them at that moment?
  • "Now many listeners want to know why Kelly didn't just call the president a liar." But to really make this claim one would have to be able to discern the listeners' true intentions. (Maybe they're just asking for NPR to answer that question, but don't want to know the answer!)
  • "It'll soon be the Year of the Rooster, and Yuan Shuizhen is preparing chicken feet in her tiny kitchen for the big meal." The reporter can see her preparing the chicken, and they can see where she's doing it, but can they see what she's doing it for? This depends on her intentions.
  • "Obama oversaw a nation at war every day of his eight-year presidency... However, he tried to deploy a small U.S. military footprint, and the limited air campaigns in Iraq, Syria, Afghanistan, and elsewhere emphasized restraint and patience." In saying that he tried to do something, NPR makes a claim about his inner thoughts; by the standards Kelly articulates, they should have said that Obama took military actions that some people interpreted as an attempt to deploy a small footprint.
  • "Federer watched the replay on the tournament screen, and leaped for joy when it showed his last shot was in." NPR seems willing to peer into Federer's head to divine the emotion behind his leap. If they were being more careful, they might have said that he leapt in a way similar to the way that joyful people sometimes leap.
  • Trump "joked that the senior staff standing near him for the signing had 'one last chance to get out' before they would have to stick to limits on lobbying laid out in the directive." Whether this was a joke depends on the President's intentions.
  • "Trump knows that many parts of Obamacare are popular with the white, working-class voters that put him in office." Knowledge requires belief, and belief depends on one's internal attitudes. Indeed, this knowledge ascription looks like it might imply enough about Trump's inner life to render certain possible assertions (e.g., that no parts of Obamacare are popular with those voters) lies. So if it's possible to know things like this, it should be possible to know about some lies.

My point isn't that any of these are unreasonable ascriptions. They seem perfectly natural, and I think that's right and good. But they reflect a commitment to anti-skepticism about others' minds. Kelly's claim that, as a rule, NPR doesn't report on people's thoughts is false. NPR is employing a more complicated practice—often, they tell us what people are thinking or feeling or trying to do, but not when it comes to whether the President is trying to deceive. This is not a good justification for declining to call things lies. Maybe there's a different good justification, but this isn't it.


  1. Are you aware of the term "principle of charity"?

    Could that be a good reason not to call misstatements of facts "lies", even if you "know" (read: believe), in your commitment to anti-skepticism, that the statements are lies?

    Could it further be the case that Mary Louise Kelly, in her commitment to the principle of charity and in the absence of knowing the Trumpster's intentions, chose to use the words she did?

    Could it further be the case that neither NPR nor Kelly ever made the claim that "as a rule, NPR doesn't report on people's thoughts"? And that your claim that Kelly claimed as much is false?

    The cited article makes it clear that what is at issue is not "thoughts in general" but the "deceptive intent".


    1. Thanks for your comment Aaron. I certainly agree that Kelly is choosing her words carefully—she gives a pretty specific justification for them. In that spirit, I'm expressing a disagreement with it.

      I actually think the principle of charity is super interesting in this context. On many of the ways it's been developed—certainly in the seminal work by Donald Davidson on the topic—the principle of charity is a matter of ascribing beliefs in a true or reasonable way. So the principle of charity would have it that as a general rule of thumb, people know what they're talking about. That's exactly the opposite of the assumption NPR is making. They are being deliberately skeptical with respect to whether Trump knows what he's doing. I am questioning the motivation for that. And I think the principle of charity works in my favour. But maybe you have a different understanding of the principle of charity?

      You are right that I am interpreting their motivation as an instance of a broader epistemic stance about thoughts in general. It seems to be the one suggested by the text, but you're right that they're not super explicit about this. So I guess one good next question would be whether there's reason to be skeptical in particular about deceptive intents, in a way that doesn't apply to thoughts in general. Did you have an idea for why this might be?

  2. "These skeptical arguments do carry some intuitive force, but most of us—us epistemologists, and us people who do things in the world—are committed to their being wrong."

    I'm a simple armchair philosopher, but I don't understand why so many philosophers are dedicated to proving Descartes wrong. I see arguments proposing when it is "appropriate" to claim to know something (e.g., Justified True Belief), but they all seem to be inadequate on closer examination.

    From what I've observed, in common use the word 'knowledge' means, "I believe [x] to be true to the extent that I no longer question [x's] truthfulness." Knowledge is simply a stronger form of belief, not something that is separate and distinct from belief.

    Why is this not sufficient? Is it because it feels unsatisfactory to admit we don't have knowledge?

    1. There's a lot to say here. I think that knowledge is deeply tied up with action and reasons—so the idea that we don't know anything is tantamount to the idea that we have no reason to do anything, which is a pragmatically disastrous conclusion. But that's controversial.

      Maybe this is a useful thing to say in this context. Even if we don't literally know most of the things we take ourselves to know, it's still very important to credential some kinds of claims as established or secure in a way others aren't. Like, if you're a media outlet like NPR, you don't just publish whatever pops into your head, or whatever you overheard somebody say. You're only supposed to publish stuff that X. I think X="you know", but everybody who believes in the idea of journalistic standards should think this is true for some kind of epistemic X.

      So you don't actually have to run the argument, as I did, in terms of knowledge. Do it in terms of X if you like. It seems like often, outlets like NPR feel comfortable saying lots of things about people's inner states—so they must think those claims have X—but it seems like they think deceitful intentions aren't like that. My question is: why the disanalogy?

      Coming back again to the original question, this is one reason I really do think it's more harmful than people sometimes assume to capitulate to skepticism. If we say one doesn't know anything, it can feel like a short and tempting step to say that all beliefs are equally good. But this is a disaster. To deny this is to establish some epistemic criterion short of knowledge. Fine, go ahead and do so if you like. But a lot of the arguments given in terms of knowledge will be translatable into that framework.

    2. Thank you for taking the time to answer my questions. I really do appreciate it. For the record, I don't have any objections to the theme of your article wrt NPR's justification, but I have long wondered why professional philosophers are so dismissive of skepticism. The arguments I've read against it have been rather unsatisfactory. I hope you don't mind that I took the opportunity to question you about something that is not directly related to your post. (And if you choose not to continue this discussion, I totally understand.)

      " the idea that we don't know anything is tantamount to the idea that we have no reason to do anything..."

      Ahh... I see what you're saying. I disagree with this assertion, as for me belief in something is sufficient reason to act. For example, I don't *know* I'll die if I step in front of that train, but I believe I will; therefore, I choose to not step in front of the train. Knowledge is not required for me to function in the world.

      "Like, if you're a media outlet like NPR, you don't just publish whatever pops into your head, or whatever you overheard somebody say. You're only supposed to publish stuff that X. I think X='you know...'"

      Perhaps the fundamental difference is we're approaching the situation from different directions? It appears you want journalists to "know" the information is true before publishing, therefore you seek some objective definition of knowledge that can be applied. I believe the truthfulness is unknowable in the pure sense, therefore I use other conditions for X while recognizing that whether X has been satisfied is ultimately a subjective evaluation. Your thoughts?

      "If we say one doesn't know anything, it can feel like a short and tempting step to say that all beliefs are equally good."

      How are you defining "good?" I would say "all coherent belief systems that adequately explain the experiences of the individual are equally valid," by which I mean the individual has no basis to claim one is true and the other is false. Whether or not it is "good" (in a moral sense) depends on what standard it's being measured against. I don't see how a belief can be inherently good or bad.

      "But a lot of the arguments given in terms of knowledge will be translatable into that framework."

      Sure, but it also changes the tone of the discussion. Generally speaking, when people make a knowledge claim they do not allow for the possibility they are wrong. Further, there is often the expectation that others must agree with their knowledge claim or else the other is ignorant, stupid, irrational, etc. Reframing what we commonly refer to as "knowledge" as "a belief that I do not question" is not only more accurate, but it forces people to acknowledge the inherent uncertainty of their "knowledge."

      In most everyday encounters this reframing isn't necessary as we tend to associate with people who agree with us about what is known. In my experience it is most useful when there is significant disagreement. Unfortunately, as long as the word "knowledge" carries a significance beyond "a form of belief," it will be very difficult to get people to question their own beliefs and by extension, understand why others believe what they believe.