
Thoughts on the RCT Debate

Last updated on May 27, 2013

Last weekend, Nicholas Kristof published a column in the New York Times in which he praised the use of randomized controlled trials (RCTs) in development policy. In a fit of econ envy, Kristof even went so far as to confess that if he had to do it all over again, he would major in economics in college instead of political science.

As a result of Kristof’s column, however, the use of RCTs in development policy has come under considerable scrutiny in the development blogosphere.

On the one hand, most economists seem to agree that RCTs are a good statistical tool, but that they do not allow researchers to answer all the interesting research questions. This is not new, as the last five years have seen a number of papers published which criticize RCTs: Ravallion (2009), Deaton (2010), Barrett and Carter (2010), etc.

Moreover, for economists, RCTs are old news, as the paper that started it all was published more than seven years ago in Econometrica. Chris Blattman goes so far as to advise young development economists to move away from RCTs, a piece of advice I agree with wholeheartedly.

On the other hand, other social scientists are now taking note of the fact that there is a new sheriff in town when it comes to standards of statistical identification. The “credibility revolution” that took place in economics over the last 15 years is now spilling over to other social sciences, perhaps as a result of the methodological convergence I believe is taking place in the social sciences.

For example, in a post written in reaction to Kristof’s column, my friend Ed Carr writes:

“To me, RCT4D [Note: Ed and others often talk of RCTs for development, or RCT4D] work is interesting because of its emphasis on rigorous data collection – certainly, this has long been a problem in development research, and I have no doubt that the data they are gathering is valid. (…) One of the things that worries me about the RCT4D movement is the (at least implicit, often overt) suggestion that other forms of development data collection lack rigor and validity.”

In Defense of RCTs

I never thought I would have to defend RCTs. Not only am I more familiar with observational data, but I am also about to run my very first RCT. Unfortunately, Ed has it wrong here.

The emphasis on rigorous data collection in economics began long before the rise of RCTs. In development economics specifically, Angus Deaton was doing just that before I was even born, as were people like Andrew Foster, Mark Rosenzweig, John Strauss, Chris Udry, and many others working in the wake of Deaton’s work. And even Deaton, who literally wrote the book on rigorous data collection, would admit that he was not the first to do so in development economics.

Ed then adds:

“This understanding of qualitative research stands in stark contrast to what is in evidence in the RCT4D movement. For all of the effort devoted to data collection under these efforts, there is stunningly little time and energy devoted to explanation of the patterns seen in the data. In short, RCT4D often reverts to bad journalism when it comes time for explanation.”

True. But the point of RCTs is not to explain why things work but to test whether something actually works. In other words, RCTs are good at establishing the truth value of causal statements. Does giving children deworming drugs cause an improvement in school attendance and in test results? If so, by how much exactly?

When running RCTs, development economists are often interested in whether something works, not in how it works. And to be honest, I find it comforting that the science behind deworming drugs is left to non-economists.
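To make the “whether, not how” point concrete, here is a minimal sketch of the statistical core of an RCT: randomize assignment, then compare mean outcomes across arms. The Python below is purely illustrative; the sample size, the baseline attendance rate, and the assumed 5-percentage-point effect are invented numbers, not estimates from any actual deworming study.

```python
# Minimal sketch of the statistical core of an RCT: randomize, then
# compare means. All numbers are invented for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 2000                               # hypothetical sample of children
treated = rng.permutation(n) < n // 2  # random assignment to treatment

# Hypothetical outcome: school attendance rate, with an assumed
# 5-percentage-point true effect of the intervention.
attendance = (rng.normal(0.70, 0.15, n) + 0.05 * treated).clip(0, 1)

# Because assignment is random, the difference in means is an unbiased
# estimate of the average treatment effect: does it work, and by how much?
ate_hat = attendance[treated].mean() - attendance[~treated].mean()
t_stat, p_value = stats.ttest_ind(attendance[treated], attendance[~treated])

print(f"Estimated effect: {ate_hat:.3f} (p = {p_value:.4f})")
```

Note what the sketch does not contain: nothing in it explains why attendance improves. Randomization buys an unbiased answer to whether the intervention works and by how much, and nothing more.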

More Fruitful Criticisms of RCTs

One can formulate criticisms of RCTs that are considerably more fruitful than the ones Ed makes in his post. One such criticism is that many RCTs suffer from implementation bias.

That is, researchers go into a community to run an RCT, but the implementation of the RCT is left to an NGO that is both well known and trusted by the community. Is it any surprise, then, that the intervention works? Short of randomizing over who implements the intervention, we have no way of knowing whether the measured effect comes from the intervention itself or from the implementer.
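For the sake of illustration, here is a sketch of what “randomizing over who implements the intervention” could look like: a cross-randomized (2×2 factorial) design that assigns the intervention and the implementer independently at random. The village names, implementer labels, and sample size are all hypothetical; the point is only that independent assignment on both dimensions creates the four comparison cells needed to separate the intervention’s effect from the implementer’s.

```python
# Sketch of a 2x2 factorial (cross-randomized) assignment: randomize the
# intervention AND the implementer, so that implementation effects can be
# separated from intervention effects. Purely illustrative.
from collections import Counter
import numpy as np

rng = np.random.default_rng(7)

villages = [f"village_{i:03d}" for i in range(40)]  # hypothetical units

# Independent random assignment on both dimensions yields four cells:
# (treated?, implementer).
assignment = {
    v: (bool(rng.integers(0, 2)),
        str(rng.choice(["trusted_ngo", "outside_team"])))
    for v in villages
}

cells = Counter(assignment.values())
for (treated, implementer), count in sorted(cells.items()):
    print(f"treated={treated!s:<5}  implementer={implementer:<12}  n={count}")
```

With outcome data in hand, comparing the treated cells across the two implementers would reveal whether the intervention works only when it is delivered by the trusted NGO.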

Another such criticism is that insisting on RCTs is like the drunk looking for his car keys under the street lamp when he knows he lost them elsewhere, simply because the lamp is the only place where he can actually see. Many interesting research questions are not randomizable, but this should not prevent us from asking them. A good example of that is the work done by Daron Acemoglu, Simon Johnson, and Jim Robinson (or any combination thereof) on institutions.

If social scientists outside of economics want to justify their approach, I believe this last criticism is the most useful. For me, the strength of qualitative research has always been that it allows formulating hypotheses that one would not be able to formulate with quantitative research. It then falls to quantitative researchers to test those hypotheses while giving credit to the qualitative researchers who formulated them. In most cases, the best available means of testing such hypotheses will involve RCTs.