

How COVID-19 May Disrupt Food Supply Chains in Developing Countries

I had been meaning to post about this earlier but did not get a chance to do so until today given the decreased productivity those of us with younger children are currently experiencing.

At the request of IFPRI’s new director general Jo Swinnen, Tom Reardon, David Zilberman, and I wrote a post for the IFPRI blog on the prospective effects of COVID-19 on food supply chains in developing countries. Here are the opening paragraphs:

COVID-19 is spreading through the developing world. Many low- and middle-income countries are now reporting growing numbers of cases and imposing rigorous lockdown regulations in response, which impact all aspects of the economy. How will COVID-19 affect food-supply chains (FSCs) in developing countries?

The evidence suggests that the impacts will be felt widely, but unevenly. Farm operations may be spared the worst, while small and medium-sized enterprises (SMEs) in urban areas will face significant problems. Governments will have to develop policies to respond to these varied impacts to avoid supply chain disruptions, higher food prices, and severe economic fallout for millions of employees.

You can find the rest of the post (with translations in French and in Spanish) here.

‘Metrics Monday: Peter Kennedy, Judea Pearl, or Both?

Steve writes:

Marc,

I have followed your blog silently for a while now and always appreciate your approach to applied econometrics. I have a question for you if you will indulge me. I am a fan of Peter Kennedy ever since his “Sinning in the Basement” article and its inclusion in his Guide to Econometrics. Peter’s chapter on applied economics has shaped my econometric and data science teaching daily ever since. Last week I discovered Judea Pearl’s Book of Why. So I set out to see what I had been missing. I discovered Pearl wrote a paper on Haavelmo who I read at Ohio State over 40 years ago. I discovered exchanges between Pearl and Guido Imbens … I began to search for Peter Kennedy and Judea Pearl. Your blog entry … is one of the few google hits that came up for that search.

As you mention you are a fan of both Kennedy and Pearl, here are the questions that are on my mind: Are there links between Kennedy’s guidance on how to do applied economics and Pearl’s how to do causality? Do the two together make one a better applied econometrician? Do the two together constitute something business cares about?

I wrote a piece on what Kennedy’s rules mean from an ethical point of view. It is going to be published by O’Reilly Media in a collection: 97 Things About Ethics Everyone In Data Science Should Know, edited by Bill Franks. You can find my entry at https://econdatascience.com/ethics-rules-in-applied-econometrics-and-data-science/

My answers:

Are there links between Kennedy’s guidance on how to do applied economics and Pearl’s how to do causality? 

A little bit, but not that many. Kennedy was largely concerned with ethics (nowadays, he’d write about replicability, transparency, pre-analysis plans, and so on) and he was writing pre-Credibility Revolution (so before causality was “a thing”), whereas Pearl is largely concerned with making explicit the assumptions that can lead to causal inference, and he is clearly a major actor in the Credibility Revolution, if not in economics then at least outside of it. I think the link is mainly this: Kennedy would say that you need to be clear about the limitations of your work; Pearl would say that you need to be clear about your assumptions, and about stating explicitly the causal model you have in mind. I see limitations as related to those assumptions.
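
To make that last point concrete, here is a minimal sketch (mine, not from either book) of what “stating the causal model explicitly” can look like in practice: a small directed acyclic graph (DAG) written down in Python. The variable names and the causal structure are purely hypothetical.

```python
# A minimal, hypothetical example of writing down a causal model as a DAG,
# so that the identifying assumptions are stated explicitly rather than
# left implicit in the regression.
import networkx as nx

dag = nx.DiGraph()
dag.add_edges_from([
    ("ability", "education"),   # ability affects schooling choices
    ("ability", "earnings"),    # ...and earnings directly (a confounder)
    ("region", "education"),    # region shifts access to schooling only
    ("education", "earnings"),  # the causal effect of interest
])

# Writing the DAG down makes the identifying assumption explicit:
# conditional on "ability", there is no open back-door path from
# "education" to "earnings".
print("Parents of earnings:", list(dag.predecessors("earnings")))
print("Graph is acyclic:", nx.is_directed_acyclic_graph(dag))
```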

Do the two together make one a better applied econometrician? 

Yes, no doubt. Kennedy is all about observational data and about providing clear, intuitive, math- and jargon-free (as much as possible) guidance about how to do good empirical work, while ignoring causality. Pearl is all about causality. As an economist, I’d recommend Kennedy as well as Angrist and Pischke more than I’d recommend Kennedy and Pearl, or I’d substitute Morgan and Winship for Pearl, since their book is more easily accessible to economists. (Note: This is not a knock on Judea Pearl, whose contribution to our understanding has been nothing short of colossal. I just find that his Causality book is overkill for most economists.)

Do the two together constitute something business cares about?

It depends on what maximizing profit requires: prediction or inference. If it’s prediction, then no: they should be all about machine learning methods. If it’s causal inference, then yes, because Kennedy and Pearl together will make you a better user of data and econometric methods, and a sharper thinker on identification and (structural) assumptions. You might think “Does business really do causal inference?” I know economists who work for Amazon, and it turns out the answer is “Yes,” at least for Amazon, which apparently has a number of people working on causal inference. (Though from a microeconomic-theoretic perspective, I imagine the ability to invest in that type of research is facilitated by extra-normal profits, as with any other R&D activity!)

‘Metrics Monday: Assessing the Extent of SUTVA Violations

I will be teaching the last quarter (i.e., half-semester) of our first-year graduate econometrics sequence this year. This of course means that I will be teaching causal inference.

To do so, I am using the second edition of Morgan and Winship’s wonderful Counterfactuals and Causal Inference, which features an excellent discussion of the stable unit treatment value assumption (SUTVA).

Many people who were trained in econometrics prior to the Credibility Revolution are not familiar with the acronym SUTVA, and even the full name “stable unit treatment value assumption” can sound more confusing than not. In economics, people sometimes refer to it as the “no-macro-effect” or “partial equilibrium” assumption.

What SUTVA says, basically, is that for a treatment D and an outcome Y, the value of D for individual i in time period t should not have any effect on the value of Y for any individual other than i in any time period, or for individual i in any time period other than t.

Put more simply: That individual i gets treated in period t should have no effect on any other individual’s outcome at any given time, nor should it have any effect on that individual’s outcome in other time periods.

Put yet more simply: There should not be any spillovers.
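
For readers who like notation, here is one common cross-sectional potential-outcomes statement of the assumption (my notation, not Morgan and Winship’s), for units i = 1, …, N with treatment assignment vector D = (D_1, …, D_N):

```latex
% SUTVA (no interference across units, no hidden versions of treatment):
% unit i's potential outcome depends only on its own treatment status,
% not on the treatments assigned to the other units.
Y_i(D_1, \dots, D_N) = Y_i(D_i) \quad \text{for all } i = 1, \dots, N.
```

The panel version discussed above simply adds the requirement that treatment in one period not affect outcomes in other periods.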

SUTVA can be extremely difficult to satisfy, and as with many other assumptions, though it might be feasible to rule out certain types of violations, it may be difficult if not impossible to rule them all out.

In Bellemare and Nguyen (2018), for instance, we were interested in the relationship between farmers markets and food-borne illness in a given state in a given year. In an attempt to rule out contemporaneous spillovers from neighboring states, we controlled for the average number of farmers markets in neighboring states, but this did not help with any potential spillovers from year to year, or across states from year to year, no matter how unlikely they are.
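
As an illustration only (this is simulated data and a stylized specification, not our actual code or estimates from the paper), here is what controlling for a neighbor average looks like in a two-way fixed-effects regression:

```python
# Stylized, simulated example of controlling for the average treatment in
# neighboring units to soak up contemporaneous cross-state spillovers.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states, years = 20, 15
df = pd.DataFrame(
    [(s, t) for s in range(states) for t in range(years)],
    columns=["state", "year"],
)
df["markets"] = rng.poisson(30, len(df))
# Hypothetical "average number of farmers markets in neighboring states".
df["neighbor_markets"] = rng.poisson(30, len(df))
df["illness_rate"] = (
    0.02 * df["markets"] + 0.01 * df["neighbor_markets"]
    + rng.normal(0, 1, len(df))
)

# Two-way fixed effects, with the neighbor average as a control and
# standard errors clustered at the state level.
fit = smf.ols(
    "illness_rate ~ markets + neighbor_markets + C(state) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(fit.params[["markets", "neighbor_markets"]])
```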

In preparing my other graduate class (microeconomics of agricultural development) this semester, I read an article that does a wonderful job of testing for SUTVA violations. In their 2019 article investigating the puzzle of “sell-low, buy-high” behavior (i.e., the phenomenon whereby smallholders sell their crops at low prices around harvest time, only to buy the same commodities back later in the year at high prices), Burke et al. test for SUTVA violations by randomly varying the intensity of a randomly assigned treatment.

This double randomization allows them first to estimate the impact of their treatment, which consists of a loan offered at harvest time, and then to estimate the impact of treatment spillovers. The idea behind the latter is that, if SUTVA holds, the estimate of the treatment effect should be invariant to how many people receive a loan within a given community.
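
To see the logic, here is a stylized simulation (not Burke et al.’s design, data, or estimates; all numbers are made up) of a randomized-saturation experiment in which the estimated treatment effect is then allowed to vary with the share of treated households in a community:

```python
# Stylized simulation of double randomization: first randomize the treatment
# saturation assigned to each community, then randomize which households
# within it are treated. The data-generating process below builds in a
# spillover, so the estimated treatment effect varies with saturation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_communities, hh_per_community = 200, 50
rows = []
for c in range(n_communities):
    # Step 1: community-level draw of treatment saturation.
    saturation = rng.choice([0.1, 0.5, 0.9])
    # Step 2: household-level draw of treatment status.
    treated = rng.random(hh_per_community) < saturation
    # Hypothetical outcome: a direct effect of 1.0 that is eroded as more
    # neighbors are treated (the spillover that violates SUTVA).
    y = 1.0 * treated - 0.8 * saturation * treated \
        + rng.normal(0, 1, hh_per_community)
    for i in range(hh_per_community):
        rows.append((c, saturation, int(treated[i]), y[i]))

df = pd.DataFrame(rows, columns=["community", "saturation", "treated", "y"])

# If SUTVA held, the interaction term would be zero: the effect of one's own
# treatment would not depend on how many neighbors are treated.
fit = smf.ols("y ~ treated * saturation", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["community"]}
)
print(fit.summary().tables[1])
```

A statistically significant interaction between own treatment and community-level saturation is evidence against SUTVA; the standard errors are clustered at the community level because saturation is randomized at that level.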

Burke et al.’s findings are telling: When few people are treated in a given community, receiving a loan at harvest reduces the extent of “sell-low, buy-high” behavior and increases the welfare of smallholders via an increased use of storage. But when many people are treated in a given community, smallholders are not significantly better off, because using storage is no more profitable.

As I said above, testing whether SUTVA holds can be extremely difficult, if not impossible. Burke et al. randomly varied treatment intensity to get at whether the SUTVA held, but not everyone can do so. Testing whether SUTVA holds can be particularly difficult with observational data. But this need not doom one’s findings. One way out of this is to admit that one cannot test for SUTVA, and that one’s treatment effect estimate should hold for “similar situations,” which ultimately limits external validity.

In Burke et al.’s case, had they not varied treatment intensity and only offered loans to small proportions of smallholders in each community, this would have meant saying that the treatment effect should hold in other situations where only a small proportion of smallholders are treated in each community.