
Trivial Confirmations of the Obvious?

An op-ed by Jacqueline Stevens a few weekends ago in the New York Times made a lot of waves. In it, Stevens — a professor in the political science department at Northwestern University — essentially declares herself in favor of eliminating National Science Foundation funding for political science research.

Her reason? Political scientists are lousy forecasters.

This post is not going to be a response to Jacqueline Stevens. GWU’s Henry Farrell has a great response here, Stanford’s James Fearon — whose work is singled out by Stevens as the type of work she dislikes — has his own response here, and forecaster extraordinaire Jay Ulfelder responds here.

What I am going to take issue with here instead is a two-sentence excerpt. In her op-ed, Stevens writes of empirical research in political science that

Many of today’s peer-reviewed studies offer trivial confirmations of the obvious (…). I look forward to seeing what happens to my discipline and politics more generally once we stop mistaking probability studies and statistical significance for knowledge.

Trivial confirmations of the obvious, really?

Have Beliefs, Will Travel

This reminds me of a comment I received from an anonymous referee a while ago on a paper I had submitted for publication.

In that paper, I was trying to show that a variable X caused outcome Y. To be sure, X and Y were correlated. But this didn’t mean that X caused Y. So I worked extra hard to convince the reader that my research design effectively identified the causal relationship flowing from X to Y. And in that case, knowing whether X caused Y was actually pretty important for policy.

The specific comment I received from one of the referees was:

I question the rationale for the paper – i.e. do we really need statistical evidence on this question?

Here’s how I feel inside when I get comments like that:

Portrait of the Artist as a Stung Man.

In this case, as in the case of Stevens’s op-ed, whether something is a “trivial confirmation of the obvious” or whether “we really need statistical evidence on this question” lies in the eye of the beholder.

And the beholder believing that X causes Y does not make it so. After all, there are people who think that there is a greater than 50 percent chance that red will come up at roulette after black came up four times in a row, a belief that is demonstrably false.
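If the roulette example seems abstract, here is a minimal simulation sketch of why that belief — the classic gambler’s fallacy — is false. This is my own illustration, assuming a European wheel with 18 red, 18 black, and 1 green pocket: spins are independent, so conditioning on four prior blacks changes nothing.

```python
# Simulate many spins of a European roulette wheel and compare the overall
# probability of red with the probability of red given that the previous
# four spins were black. The wheel has no memory, so the two should match.
import random

random.seed(42)
N = 1_000_000
colors = ["red"] * 18 + ["black"] * 18 + ["green"]  # European wheel
spins = [random.choice(colors) for _ in range(N)]

red_overall = sum(s == "red" for s in spins) / N

after_four_blacks = [
    spins[i]
    for i in range(4, N)
    if all(s == "black" for s in spins[i - 4 : i])
]
red_after_blacks = sum(s == "red" for s in after_four_blacks) / len(after_four_blacks)

print(f"P(red) overall:              {red_overall:.3f}")
print(f"P(red | four blacks before): {red_after_blacks:.3f}")
# Both come out around 18/37, i.e. roughly 0.49 -- nowhere near "greater than 50 percent."
```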

Establishing a causal relationship can be far from trivial when the relationship in question involves human beings (those pesky people don’t behave like molecules and have a mind of their own, how dare they?). It’s hard work. In most cases, we do need statistical evidence. This (not exactly) just in: Correlation is not causation.
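To make the “correlation is not causation” point concrete, here is a toy simulated sketch — my own illustration, not anything from the paper in question. An unobserved confounder Z drives both X and Y, so the two are strongly correlated even though X has no causal effect on Y whatsoever; this is exactly the trap a careful research design has to rule out.

```python
# Toy data-generating process with a hidden confounder Z that drives both
# X and Y. X never enters the equation for Y, yet corr(X, Y) is large --
# a naive reading of the correlation would wrongly conclude that X causes Y.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)            # unobserved confounder
x = 2.0 * z + rng.normal(size=n)  # X depends on Z, not on Y
y = 3.0 * z + rng.normal(size=n)  # Y depends on Z, not on X

print(f"corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # strongly positive despite no causal link
```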

And in cases where statistical work actually does confirm the obvious, there is still a great deal of value in knowing the exact magnitude of the impact of X on Y. Rational agents take decisions by carefully comparing costs and benefits.

So yeah, I guess we all know that a college degree means a higher income in the future. But before someone spends $41,592 per year on a college degree, they might want to have an idea of how much money they will make once they start working.
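To see why the magnitude matters, here is a back-of-the-envelope present-value sketch. The $41,592-per-year tuition figure comes from the paragraph above; the earnings premium, working life, and discount rate are hypothetical placeholders I am assuming for illustration — the point being that someone has to estimate that premium before the calculation means anything.

```python
# Back-of-the-envelope cost-benefit comparison of a college degree.
# Tuition figure is from the post; every other number is an assumed placeholder.
tuition_per_year = 41_592
years_of_college = 4
annual_earnings_premium = 20_000   # assumed extra annual income from the degree
working_years = 40
discount_rate = 0.03

# Present value of costs: tuition paid in years 0 through 3.
pv_costs = sum(tuition_per_year / (1 + discount_rate) ** t
               for t in range(years_of_college))

# Present value of benefits: the earnings premium, received in years 4 onward.
pv_benefits = sum(annual_earnings_premium / (1 + discount_rate) ** t
                  for t in range(years_of_college, years_of_college + working_years))

print(f"PV of costs:       ${pv_costs:,.0f}")
print(f"PV of benefits:    ${pv_benefits:,.0f}")
print(f"Net present value: ${pv_benefits - pv_costs:,.0f}")
# Whether the NPV is positive hinges entirely on the size of the premium --
# which is precisely what those "obvious" statistical studies estimate.
```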