These three papers, published last week in the August 2011 issue of the Journal of African Economies, focus by and large on a possible marriage between structural models and experimental methods. By combining the empirical strength of randomization with research questions that go beyond simple impact evaluation, this approach could, I believe, become a very important area for future research; I am myself involved in a project of this sort.
The first paper is by David McKenzie and is titled “How Can we Learn Whether Firm Policies Are Working in Africa? Challenges (and Solutions?) for Experiments and Structural Models”:
“Firm productivity is low in African countries, prompting governments to try a number of active policies to improve it. Yet despite the millions of dollars spent on these policies, we are far from a situation where we know whether many of them are yielding the desired payoffs. This article establishes some basic facts about the number and heterogeneity of firms in different Sub-Saharan African countries and discusses their implications for experimental and structural approaches towards trying to estimate firm policy impacts. It shows that the typical firm programme such as a matching grant scheme or business training programme involves only 100 to 300 firms, which are often very heterogeneous in terms of employment and sales levels. As a result, standard experimental designs will lack any power to detect reasonably sized treatment impacts, while structural models which assume common production technologies and few missing markets will be ill-suited to capture the key constraints firms face. Nevertheless, I suggest a way forward which involves focusing on a more homogeneous sub-sample of firms and collecting a lot more data on them than is typically collected.”
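McKenzie's point about statistical power can be made concrete with a back-of-the-envelope calculation. The sketch below uses the classic two-sample minimum-detectable-effect formula for a 50/50 randomized design; the sample size (200 firms) and the coefficient of variation of sales (2.0) are hypothetical numbers chosen in the spirit of the abstract, not figures taken from the paper:

```python
import math

def mde_sd_units(n, z_alpha=1.96, z_power=0.84):
    """Minimum detectable effect, in standard-deviation units, for a
    two-arm trial with n firms split equally between treatment and
    control (5% two-sided test, 80% power by default)."""
    return (z_alpha + z_power) * math.sqrt(4.0 / n)

# Hypothetical programme: 200 firms whose sales are very heterogeneous,
# with a coefficient of variation (SD / mean) of 2.
n = 200
cv_sales = 2.0

mde = mde_sd_units(n)
# Convert the effect from SD units into a share of mean sales.
print(f"MDE: {mde:.2f} SD, i.e. about {mde * cv_sales:.0%} of mean sales")
```

With 200 firms, the smallest detectable effect is roughly 0.4 standard deviations, and because heterogeneity makes the standard deviation large relative to mean sales, that translates into an implausibly large required impact, which is exactly why McKenzie argues for more homogeneous sub-samples and richer data collection.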
The second paper is by Glenn Harrison and is titled “Randomization and Its Discontents”:
“Randomized control trials have become popular tools in development economics. The key idea is to exploit deliberate or naturally occurring randomization of treatments in order to make causal inferences about ‘what works’ to promote some development objective. The expression ‘what works’ is crucial: the emphasis is on evidence-based conclusions that will have immediate policy use. No room for good intentions, wishful thinking, ideological biases, Washington Consensus, cost-benefit calculations or even parametric stochastic assumptions. A valuable byproduct has been the identification of questions that other methods might answer, or that subsequent randomized evaluations might address. An unattractive byproduct has been the dumbing down of econometric practice, the omission of any cost-benefit analytics and an arrogance towards other methodologies. Fortunately, the latter are gratuitous, and the former point towards important complementarities in methods to help address knotty, substantive issues in development economics.”
The third paper is by Leonard Wantchekon and Jenny Guardado and is titled “Methodology Update: Randomized Controlled Trials, Structural Models, and the Study of Politics”:
“This paper explores how the combined use of Randomised Controlled Trials (RCTs) and Structural Models can improve the study of politics. We posit that randomized controlled trials can benefit from the insights provided by structural models, particularly for the type of questions posed in Political Science. Although structural models have scarcely been utilized in politics, the close relationship between theory and empirics required by structural models would help solve many of the current pitfalls of RCTs in political science. For instance, this approach can alleviate concerns of external validity often associated with experimental evidence. Finally, we present a real political science example to illustrate the implementation of this approach.”