The Welfare Impacts of Commodity Price Volatility: A Scientific Dialogue

(I have been meaning to write a blog post about this for some time, but it took a very long time for both of the articles involved to get published.)

I have blogged a number of times about my 2013 article with Chris Barrett and David Just titled “The Welfare Impacts of Commodity Price Volatility” (see here, here, here, here, and here). About a year ago, Linden McBride, a friend of mine who is currently doing her PhD in applied economics at Cornell (where she is doing some really cool work on policy targeting using random forest algorithms, and where she serves on the editorial team at econthatmatters.com), got in touch regarding a comment she had written on our work.

In her comment, Linden shows how changing one of the assumptions we make in our 2013 article reverses our core qualitative finding. Here is her abstract:

This comment discusses the robustness of the policy implications of Bellemare, Barrett, and Just’s paper, “The Welfare Impacts of Commodity Price Volatility: Evidence from Rural Ethiopia” (2013). Bellemare, Barrett, and Just present a theoretical and empirical approach to the estimation of willingness to pay for food price stabilization that accounts for the covolatility of prices, making a significant contribution to the literature. However, in the course of applying their model to data from rural Ethiopia, the authors make an empirical assumption in the treatment of zero-valued income households that produces a distortion in the distribution of household budget shares. This comment identifies the consequences of this assumption for the estimated relationship of poor and wealthy households’ willingness to pay for food commodity price stabilization, and shows the results one would obtain under a different, distribution-preserving treatment of zero-valued income. The key finding is that the distributional benefit incidence of food price stabilization found in Bellemare, Barrett, and Just (2013) is reversed when the budget share of marketable surplus is calculated over observed, as opposed to mean, household income where available.
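To make the denominator choice at the heart of Linden’s comment concrete, here is a minimal sketch, in Python with toy data and hypothetical column names (neither paper’s actual code or variable definitions are reproduced here), of the two treatments her abstract contrasts: computing the budget share of marketable surplus over mean household income versus over observed income where available.

```python
import numpy as np
import pandas as pd

# Toy data; column names are hypothetical placeholders, not from either paper.
df = pd.DataFrame({
    "marketable_surplus": [120.0, -40.0, 80.0],
    "income": [0.0, 500.0, 300.0],  # note the zero-valued income report
})

# Treatment 1 (one reading of the mean-income treatment): divide by the
# sample mean of income for every household.
share_over_mean = df["marketable_surplus"] / df["income"].mean()

# Treatment 2 (one reading of the distribution-preserving treatment):
# divide by observed income where it is nonzero, falling back to the
# sample mean only for zero-valued reports (the comment's actual
# fallback for zeros may differ).
denominator = df["income"].replace(0.0, np.nan).fillna(df["income"].mean())
share_over_observed = df["marketable_surplus"] / denominator

print(pd.DataFrame({
    "over_mean_income": share_over_mean,
    "over_observed_income": share_over_observed,
}))
```

Because the denominator choice shifts the budget shares of the affected households, it can shift the estimated relationship between household wealth and willingness to pay for price stabilization, which is exactly the distortion the comment identifies.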

As is common practice in journals, we were invited to write a reply to her comment. In our reply, we highlight two things. First, this is a wonderful opportunity to have an actual dialogue about research findings, and it illustrates that no research finding is the absolute last word on any topic (a good sign that someone suffers from chronocentrism is how vocal they get about how their findings are the last word on a given topic).

Second, this highlights both (i) how observational work of a more structural nature (or, more precisely, of a non-reduced-form nature) involves a number of small decisions about how to treat outliers, how to transform certain variables, and so on, all of which can add up to big changes (and make the cost of any pre-analysis plan increase very quickly), and (ii) the value of replication. We conclude:

McBride’s comment illustrates one of the many issues one encounters when dealing with observational (i.e., nonexperimental) data. In Bellemare, Barrett, and Just (2013), not only did we have to make an ad hoc assumption regarding those cases where a household reported a cash income of zero, we also had to make several other assumptions. Among those other ad hoc assumptions, we had to assume that our use of household as well as district-round fixed effects led to estimated coefficients that were unbiased. We also adopted the following assumptions: that relative risk aversion was both constant and the same across all households, and that treatment effects (i.e., marketable surplus elasticities) were homogeneous across all households for all prices, cross-prices, and income. …

But the assumptions one has to make along the way are the stuff empirical work is made of, or at least the stuff empirical work that takes theory seriously is made of. As such, any research project of the scope and ambition of Bellemare, Barrett, and Just (2013) will necessarily have to make assumptions like the one McBride is concerned with in her comment. Experimental methods offer an alternative and in several ways superior approach to identifying price risk preferences. To that effect, two of us and a coauthor have been laying the cornerstone of an experimental research program on price risk preferences (Lee, Bellemare, and Just 2015), and our preliminary results have been interesting and, at times, surprising. Even the best experimental evidence, however, is subject to its own set of limitations (if anything, experimental evidence tends to have limited external validity), and so, much as our 2013 article was not our discipline’s last word on price risk, we doubt that experimental work will be our discipline’s last word on price risk.
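For readers who want to see what the fixed-effects assumption quoted above looks like in practice, below is a minimal, purely illustrative sketch of a two-way fixed-effects regression with household and district-round effects entered as dummies. The variable names, the file name, and the clustering choice are placeholders, not the actual specification estimated in Bellemare, Barrett, and Just (2013).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per household per survey round.
df = pd.read_csv("panel.csv")

# Household and district-round fixed effects entered as dummies;
# identification rests on the (untestable) assumption that these
# effects absorb the relevant unobserved confounders.
model = smf.ols(
    "marketable_surplus ~ log_price + log_income"
    " + C(household_id) + C(district_round)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["household_id"]})

print(model.summary())
```

The point of the sketch is not the software but the assumption: whether those fixed effects actually deliver unbiased coefficients is, like the treatment of zero-valued income, something the data cannot settle on their own.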