Category: Impact Evaluation

Identifying Causal Relationships vs. Ruling Out All Other Possible Causes

Portrait of Aristotle (Source: Wikimedia Commons)

I was in Washington last month, at an event on the link between climate change and conflict, to discuss my work on food prices, in which I look at whether food prices cause social unrest.

As many readers of this blog know, disentangling causal relationships from mere correlations is the goal of modern science, social or otherwise, and though it is easy to test whether two variables x and y are correlated, it is much more difficult to determine whether x causes y.

So while it is easy to test whether increases in the level of food prices are correlated with episodes of social unrest, it is much more difficult to determine whether food prices cause social unrest.

In my work, I try to do so by instrumenting food prices with natural disasters. To make a long story short: if you believe that natural disasters affect social unrest only through food prices, then using disasters as the source of variation in food prices strips away any variation that does not flow from food prices to social unrest. In other words, this ensures that the estimated relationship between the two variables is causal. This technique is known as instrumental variables estimation.
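To make the logic of instrumental variables concrete, here is a minimal simulation sketch (not the author's actual estimation; all data and variable names are made up for illustration). A confounder drives both "food prices" and "unrest," so a naive regression slope is biased, while the simple one-instrument IV (Wald) estimator, which divides the instrument-outcome covariance by the instrument-treatment covariance, recovers the true effect:

```python
import numpy as np

# Synthetic illustration: z = natural disasters (instrument),
# x = food prices, y = social unrest, u = unobserved confounder.
rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # confounder
x = 0.8 * z + u + rng.normal(size=n)         # food prices depend on z and u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect of x on y is 2.0

# Naive OLS slope is biased because u affects both x and y.
ols = np.cov(x, y)[0, 1] / np.var(x)

# IV (Wald) estimator: reduced-form covariance over first-stage covariance.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS estimate: {ols:.2f}")  # biased upward by the confounder
print(f"IV estimate:  {iv:.2f}")   # close to the true 2.0
```

The key (untestable) assumption is the exclusion restriction stated above: disasters must affect unrest only through prices; the simulation builds that assumption in by construction.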

Identifying Causal Relationships vs. Ruling Out All Other Causes

As with almost any other discussion of a social-scientific issue nowadays, the issue of causality came up during one of the discussions we had at that event in Washington. It was at that point that someone implied that it did not make sense to talk of causality by bringing up the following analogy:

Slides of My Keynote Lecture at Last Weekend’s “Economics and Management of Risk in Agriculture and Natural Resources” Conference

I was trained as an agricultural and applied economist, so I have spent a lot of time doing research on risk as it relates to agriculture and development (see here and here for published articles).

Because of this, I have been involved with the annual Economics and Management of Risk in Agriculture and Natural Resources conference for the past few years.

I first presented at that conference in 2009, and having then volunteered to help organize it, I was in charge of the conference program in 2010 and of logistics in 2011.

This year, I was asked to give the keynote lecture, in which I chose to discuss what the “credibility revolution” that took place in economics over the past ten years or so — which has led economists to adopt stricter standards of evidence and of statistical identification — means for agricultural and applied economics as a field.

In case you have an interest in this topic, I am making the slides of my keynote lecture available. I think the content of those slides is especially relevant for current graduate students in agricultural and applied economics.

The Economics and Management of Risk in Agriculture and Natural Resources conference is usually held somewhere on the Gulf Coast. This year, it was held in Pensacola, FL. I took the picture at the top of this post while walking along the beach early Saturday morning.

Randomization and Inference

Experiments have become an increasingly common tool for political science researchers over the last decade, particularly laboratory experiments performed on small convenience samples. We argue that the standard normal theory statistical paradigm used in political science fails to meet the needs of these experimenters and outline an alternative approach to statistical inference based on randomization of the treatment. The randomization inference approach not only provides direct estimation of the experimenter’s quantity of interest — the certainty of the causal inference about the observed units — but also helps to deal with other challenges of small samples. We offer an introduction to the logic of randomization inference, a brief overview of its technical details, and guidance for political science experimenters about making analytic choices within the randomization inference framework. Finally, we reanalyze data from two political science experiments using randomization tests to illustrate the inferential differences that choosing a randomization inference approach can make.

That’s the abstract of a forthcoming American Journal of Political Science article by Luke Keele, Corrine McConnaughy, and Ismail White.
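The core idea of randomization inference is simple enough to sketch in a few lines. The following toy example (simulated data, not a reanalysis of the article's experiments) holds the observed outcomes fixed under the sharp null of no treatment effect for any unit, re-randomizes the treatment labels many times, and asks how often a difference in means at least as extreme as the observed one arises by chance:

```python
import numpy as np

# Toy two-arm experiment: 20 treated, 20 control units (simulated data).
rng = np.random.default_rng(1)
treated = rng.normal(loc=1.0, size=20)   # outcomes under treatment
control = rng.normal(loc=0.0, size=20)   # outcomes under control
outcomes = np.concatenate([treated, control])
labels = np.array([1] * 20 + [0] * 20)

observed = outcomes[labels == 1].mean() - outcomes[labels == 0].mean()

# Under the sharp null, outcomes are fixed; only the assignment varies.
# Re-draw the randomization many times and recompute the test statistic.
draws = 10_000
null_diffs = np.empty(draws)
for i in range(draws):
    perm = rng.permutation(labels)
    null_diffs[i] = outcomes[perm == 1].mean() - outcomes[perm == 0].mean()

# Two-sided p-value: share of re-randomizations at least as extreme.
p_value = np.mean(np.abs(null_diffs) >= abs(observed))
print(f"observed difference: {observed:.2f}")
print(f"randomization p-value: {p_value:.4f}")
```

Because the p-value refers only to the realized units and the actual randomization scheme, no appeal to a hypothetical superpopulation or to normal-theory approximations is needed — which is exactly the appeal for small convenience samples.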

That being said, I really can’t wait for summer to arrive so I can finally get through my “Documents to Read” folder.