“Big Questions, Not Project Evaluations”: Blattman on Impact Evaluation

This week in my development seminar, we will be discussing the background ideas and methods proper to development microeconomics.

To do so, and to make sure that everyone has a clear understanding of what is at stake, I will need to digress briefly on the use of linear regression and on the idea of causality in the social sciences.

As such, a recent post on impact evaluation by Chris Blattman turns out to be quite timely:

“My point in 2008: to talk about how impact evaluations could better serve the needs of policymakers, and accelerate learning.

Frankly, the benefits of the simple randomized control trial have been (in my opinion) overestimated. But with the right design and approach, they hold even more potential than has been promised or realized.

I’ve learned this the hard way.

Many of my colleagues have more experience than I do, and have learned these lessons already. What I have to say is not new to them. But I don’t think the lessons are widely recognized just yet.

So, when asked to speak to DFID again yesterday (to a conference on evaluating governance programs), I decided to update a little. They had read my 2.0 musings, and so the talk was an attempt to draw out what more I’ve learned in the three years since.

The short answer: policymakers and donors — don’t do M&E, do R&D. It’s not about the method. Randomized trials are a means to an end. Use them, but wisely and strategically. And don’t outsource your learning agenda to academics.”

Here are Chris’ slides for his DFID talk.

His point is that agencies like DFID should focus on funding research that looks at big questions about human behavior rather than research that simply asks whether specific interventions have any impact.

Another, highly underappreciated point Chris makes is that everyone who does impact evaluation makes implicit assumptions about how the world works, but very few actually state those assumptions explicitly, a point I have often heard made about randomized controlled trials by more structurally minded researchers.
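To make that point concrete, here is a minimal sketch in standard potential-outcomes notation (the notation is mine, not Chris’): even the simplest randomized controlled trial rests on an identifying assumption that usually goes unstated in write-ups.

```latex
% (Sketch; assumes amsmath and amssymb are loaded.)
% Potential outcomes: Y_i(1) with treatment, Y_i(0) without; D_i is the
% treatment indicator. The difference in means identifies the average
% treatment effect only under independence of assignment and potential
% outcomes, which randomization delivers in the experimental sample:
\[
\mathbb{E}[Y_i \mid D_i = 1] - \mathbb{E}[Y_i \mid D_i = 0]
\;=\;
\underbrace{\mathbb{E}\!\left[ Y_i(1) - Y_i(0) \right]}_{\text{average treatment effect}}
\quad \text{if} \quad \bigl(Y_i(1),\, Y_i(0)\bigr) \perp D_i.
\]
% Note what the condition does not buy: extrapolating the estimate to other
% populations, scales, or implementers requires behavioral assumptions that
% the trial itself cannot supply.
```

Randomization guarantees the independence condition within the experiment; everything beyond that, including any claim about why the intervention works or whether it would work elsewhere, is a model of behavior, whether stated or not.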

I believe this is exactly the way to reconcile randomistas with relatively more structural empirical researchers. I also believe that many convenient assumptions will not hold up to such scrutiny. Take, as an example I know well, the assumption in the standard principal-agent model that the principal leaves no surplus to the agent.
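For readers who have not seen it, here is a rough sketch of what that zero-surplus assumption looks like in the textbook model (generic notation, not drawn from any particular paper):

```latex
% (Sketch; assumes amsmath and amssymb are loaded.)
% Textbook principal-agent problem: q is output, w(q) the wage schedule,
% e the agent's effort, c(e) the cost of effort, u(.) the agent's utility,
% and \bar{u} the agent's reservation utility. The principal solves
\[
\max_{w(\cdot),\, e} \; \mathbb{E}\!\left[ q - w(q) \right]
\quad \text{subject to} \quad
\mathbb{E}\!\left[ u\bigl(w(q)\bigr) \right] - c(e) \;\ge\; \bar{u}.
\]
% The convenient assumption is that the participation constraint binds, i.e.,
% E[u(w(q))] - c(e) = \bar{u}: the agent is held exactly to her reservation
% utility, so the principal captures the entire surplus. If agents in the data
% retain rents above \bar{u}, the constraint is slack and the assumption fails.
```

Whether the constraint actually binds in a given setting is an empirical question, which is precisely the kind of scrutiny that implicit assumptions too rarely receive.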