Although I had seen the Glennerster and Kremer article in the Boston Review last week, I had saved it for later, as I was planning on reading it carefully so as to possibly assign it as an introductory reading in the development seminar I teach in the fall.
In a recent blog post, Chris Blattman has excellent thoughts on the article and on randomized controlled trials (RCTs) in general:
“I think my response would have differed on a few points:
- The profession’s efforts to register trials, publish null results, and replicate trials are pretty weak so far, with all incentives stacked against it, and this will need to change to make serious progress
- Every trial ought to be registered, and economics journals ought to enforce the practice
- We should welcome unregistered sub-group or post-hoc analysis, so long as it’s clearly labelled as such, and all the sub-group and post-hoc hypotheses tested are disclosed
- We need to stop finding a robust empirical result, writing a model that is consistent, then putting the model at the beginning of the paper and calling the empirical work a test (I throw up in my mouth a little every time I see this)
- Yes, observational research is often much worse, but experiments should take the high road against the worst practices, rather than simply pointing to a road lower than its own
- The greatest advantage economics holds is theory, and it ought to be wielded more productively in experimental design (especially the hundreds of atheoretical searches for significance that characterize most program evaluations in the big development agencies).”
I am especially sympathetic to points 4 and 6. Over the past 50 years, economists have devised complex theoretical models that often yield counterintuitive results. Since RCTs offer clean identification, why not use the data they generate to test those structural models?
This would be an important first step toward refining economic theory so as to make it more useful for policy, and toward insulating it against the “econ isn’t a science” criticism. Although economists have been pretty good at using experimental data to test theories, I’m not sure we have been as good at modifying our theories in light of experimental data. In most cases, I feel we are too quick to dismiss inconvenient findings as “anomalies.”