Suppose I have a bag with a hundred balls in it. The balls are made either of wood or iron; I don't know how many there are of each, but I know that there's at least one iron ball and at least one wooden ball. I start out with no idea whether there are more iron balls or more wooden balls.
Then I examine a ball, and see that it's iron. How should this impact my estimate of the number of iron balls? It's tempting to think it obvious that my estimate should go up—I've seen an iron ball, so that's at least some evidence in favour of there being a lot of iron balls in there.
But not so fast. It depends entirely on how the ball was chosen. If the ball was pulled out at random and proved to be iron, that'd be evidence for more iron in the bag. But what if it was pulled out with a magnet? Then the selection method guaranteed it was going to produce an iron ball. And I already knew there was at least one iron ball in there. So the fact that I got an iron ball gives me no new information whatsoever. My estimate of how much iron there is in the bag should change not at all, if that's how the iron ball got selected.
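The two selection methods can be checked with a quick simulation. This is a minimal sketch under an assumption the thought experiment leaves open: that my prior over the number of iron balls is uniform from 1 to 99 (at least one of each kind). Under a random draw, seeing iron pushes the estimate up; under the magnet, the estimate stays at the prior mean.

```python
import random

N_BALLS = 100
TRIALS = 200_000

def sample_bag():
    # Assumed uniform prior: anywhere from 1 to 99 iron balls.
    return random.randint(1, N_BALLS - 1)

# Method 1: random draw. Keep only trials where the drawn ball happened
# to be iron, and average the iron counts over those trials.
random_draw_irons = []
for _ in range(TRIALS):
    iron = sample_bag()
    if random.randrange(N_BALLS) < iron:  # a random ball came up iron
        random_draw_irons.append(iron)

# Method 2: magnet. The draw is guaranteed to be iron no matter how many
# iron balls there are, so every trial survives the conditioning.
magnet_irons = [sample_bag() for _ in range(TRIALS)]

print(sum(random_draw_irons) / len(random_draw_irons))  # ~66.3: estimate goes up
print(sum(magnet_irons) / len(magnet_irons))            # ~50.0: no update from the prior
```

The random-draw posterior mean works out to 199/3 ≈ 66.3, because bags with more iron are more likely to yield an iron ball at random; the magnet filters out nothing, so the average stays at 50.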
I think something like this situation, and its attendant tempting fallacy, applies in much more real-world cases with morally significant implications, too. In particular, I'm thinking today about estimates of the prevalence of bad behaviour among certain demographic groups. For example, there are many people who, when thinking about anti-Trump protesters, think of unruly mobs of violent people. There are also many people (mostly different ones) who, when thinking of America this month, think of people committing hate crimes against Muslims and black people.
If challenged about why one should think anti-Trump protesters tend to be violent, or that there are a lot of hate crimes in America, people tend to point to examples: news features about anti-Trump protesters attacking Trump supporters, or about threats of lynchings against black teenagers. I think that most of the time, such news stories are terrible evidence for what they're being used as evidence for. Even setting aside questions about whether the stories are true—let's assume they are—they do not actually provide any evidence at all for their general conclusion. The reason for this is that the cases are structurally analogous to the magnetic drawing method described above.
The world is a big place, full of all kinds of people. So even before we do any serious investigation at all, we know that there are some people who commit hate crimes against black people, and that there are some anti-Trump protesters who are violent. We should all agree that this is obvious. There are some people like that. We disagree about how prevalent these things are, but we all agree they exist.
We also know some things about the media. Since lots of people are interested in reading about violent anti-Trump protesters and hate crimes, various media sources are motivated to find and report on at least some such cases. Furthermore, the media are good enough at finding these things that it's nearly certain they'll do so.
In other words, the antecedent probability of there being some cases in question is practically 1, and the conditional probability of the media reporting on them, supposing they exist, is also practically 1. So when you read about an anti-Trump protester being violent, that should increase your estimate of how violent anti-Trump protesters on the whole are by practically zero. It's like the magnet pulling out the iron ball—it was going to find one, no matter how many there were, so the fact that it found one does not make it likelier that there are lots.
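The point about the two probabilities being practically 1 is just Bayes' rule: if the evidence was nearly certain to turn up whether or not the hypothesis is true, the likelihood ratio is close to 1 and the posterior barely moves. Here is a toy calculation with made-up likelihood numbers (the specific values are my own illustrative assumptions, not anything from polling or media data):

```python
# Two rival hypotheses about how prevalent violence is in the group.
prior = {"rare": 0.5, "common": 0.5}

# P(at least one violent incident gets reported | hypothesis).
# "Magnet" media: near-certain to find and report a case either way.
magnet_likelihood = {"rare": 0.99, "common": 0.999}

# Hypothetical random-sampling media: report on a randomly chosen
# protester, so a violent report is much likelier if violence is common.
sample_likelihood = {"rare": 0.01, "common": 0.30}

def posterior(prior, likelihood):
    # Bayes' rule: P(H | E) is proportional to P(E | H) * P(H).
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

print(posterior(prior, magnet_likelihood))  # still ~50/50: almost no update
print(posterior(prior, sample_likelihood))  # shifts heavily toward "common"
```

With the magnet-like likelihoods the posterior on "common" is about 0.502; with the random-sampling likelihoods it jumps to about 0.968. Same report, wildly different evidential force, entirely because of how the report was generated.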
Things would be different if the media worked very differently—if, for example, they picked anti-Trump protesters at random, and then reported on what those protesters were like, whatever they found. If that is what happened, then reading reports of violent protesters would be significant evidence. But that is very unlike the way our actual media works.
Our own anecdotal experiences actually work a bit (a bit!) better in this respect. If the anti-Trump protester you happen to be standing next to starts beating somebody up, that is evidence in favour of violence in that group. (It wouldn't be if in advance you'd somehow implemented a strategy of standing next to the person you thought likeliest to be violent.) For what it's worth, I spent several hours at anti-Trump rallies in Philadelphia last week. I observed no violence.
I haven't seen comparative numbers about the frequency of hate crimes in America over the past week. As far as data goes, that would be the gold standard. But I am pretty confident they've gone up, probably by a lot. This isn't based on the many news stories I've read about examples—the USA is a big enough place that it's not implausible to me that there are dozens of such cases every week, and that they're being reported and disseminated more now. But I have more specific, more personal experiences that are a bit more similar to random sampling. I haven't been victimized myself, but I do personally know someone who was physically attacked and racially insulted. And, at the university where I happened to be visiting last week, black students had that very day been threatened with lynchings.
This isn't super definitive data, but I think it's a lot more telling than lists of stories turned out by media outlets motivated to turn out lists of stories. The main point is, if you want to know how much something supports a given hypothesis, it makes a huge difference how you found it.