
Replication, Publication Bias, and Negative Findings

Last updated on May 16, 2012

I came across a fascinating read on some of the important problems that plague the scientific process in the social sciences and elsewhere. From an article by Ed Yong in the May 2012 edition of Nature:

Positive results in psychology can behave like rumours: easy to release but hard to dispel. They dominate most journals, which strive to present new, exciting research. Meanwhile, attempts to replicate those studies, especially when the findings are negative, go unpublished, languishing in personal file drawers or circulating in conversations around the water cooler. “There are some experiments that everyone knows don’t replicate, but this knowledge doesn’t get into the literature,” says Wagenmakers. The publication barrier can be chilling, he adds. “I’ve seen students spending their entire PhD period trying to replicate a phenomenon, failing, and quitting academia because they had nothing to show for their time.” (…)

One reason for the excess in positive results for psychology is an emphasis on “slightly freak-show-ish” results, says Chris Chambers, an experimental psychologist at Cardiff University, UK. “High-impact journals often regard psychology as a sort of parlour-trick area,” he says. Results need to be exciting, eye-catching, even implausible. Simmons says that the blame lies partly in the review process. “When we review papers, we’re often making authors prove that their findings are novel or interesting,” he says. “We’re not often making them prove that their findings are true.”

I have briefly discussed the lack of replication in economics here, but in short, the issue is that once a finding is published, there are practically no incentives for anyone to replicate it.

There are two reasons for this. The first is that journals tend to want to publish only novel results, so even if you manage to confirm someone else’s findings, there will be few takers for your study unless you do something significantly different… in which case you’re no longer doing replication.

The second is the tendency to publish only studies in which the authors find support for their hypothesis. This is known as “publication bias.”

For example, suppose I hypothesize that the consumption of individuals increases as their income increases, and suppose I find support for that hypothesis using data on US consumers. This result eventually gets published in a scientific journal. Suppose now that you try to replicate my finding using Canadian data and fail to do so. Few journals would be interested in such a finding. That’s partly because failing to reject the null hypothesis is not a surprising outcome in a statistical test (the test is set up so that the null of no association between consumption and income is rejected only when the evidence against it clears a 90, 95, or 99 percent confidence threshold), but also because, as Yong’s article highlights, that would not exactly be an “exciting, eye-catching” result.
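To see how this plays out in practice, here is a minimal simulation sketch of my own (it does not appear in the original post, and all of the numbers in it, including the sample size, the true slope, and the noise level, are made-up illustrations rather than estimates from any real consumption data). It runs many small “studies” of a consumption–income relationship that truly exists, and keeps only those that reject the null at the 5 percent level. Two things happen: a large share of perfectly well-conducted studies end up as unpublishable negative findings, and the estimates that do survive the filter overstate the true effect.

    # Sketch: publication bias in many small, under-powered studies.
    # All parameter values are illustrative assumptions, not real data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    TRUE_SLOPE = 0.05    # assumed true effect of income on consumption
    N_PER_STUDY = 50     # assumed sample size of each study
    N_STUDIES = 2000     # number of simulated studies
    ALPHA = 0.05         # conventional significance level

    published_slopes = []

    for _ in range(N_STUDIES):
        income = rng.normal(50_000, 10_000, N_PER_STUDY)
        consumption = TRUE_SLOPE * income + rng.normal(0, 2_000, N_PER_STUDY)
        result = stats.linregress(income, consumption)
        if result.pvalue < ALPHA:          # the "publication" filter
            published_slopes.append(result.slope)

    share_published = len(published_slopes) / N_STUDIES
    print(f"Share of studies rejecting the null: {share_published:.2f}")
    print(f"True slope: {TRUE_SLOPE:.3f}")
    print(f"Mean slope among 'published' studies: {np.mean(published_slopes):.3f}")

Under these assumed numbers, roughly half of the simulated studies fail to reject the null even though the effect is real, and the average published estimate is noticeably larger than the true slope, which is the mechanical consequence of letting significance decide what gets published.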

I am currently dealing with such a “negative finding” in one of my papers, in which I find that land titles in Madagascar, a context where donors have invested hundreds of millions of dollars in various land titling policies, do not have the positive impact on productivity posited by the theoretical literature. Perhaps unsurprisingly, the paper has proven to be a very tough sell.

(HT: David McKenzie.)