

Global Agricultural Value Chains and Food Prices

That is the title of a new working paper by Bernhard Dalheimer (currently a postdoc in our department, but headed to Purdue, where he will start in the fall as an assistant professor), Sunghun Lim (Louisiana State), and me.

We ask a simple question: How does the extent of a country’s participation in global agri-food value chains (GAVCs, or “GA-vicks”) translate into food price levels and food price volatility?

Fixed Effects and Causal Inference

That is the title of a new working paper by Dan Millimet and me. If memory serves, the genesis of this paper was an exchange Dan and I had on Twitter where we both remarked that, with panel data, adding more rounds of data is not necessarily better if the goal is to identify a causal relationship. That is because the amount of stuff, both observed and unobserved, that remains constant over time (in other words, what unit fixed effects control for) shrinks as the data grows to cover a longer time period.
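To see the intuition, here is a minimal simulation sketch (my own illustration, not something from the paper): an unobserved confounder follows a slow random walk, so it is nearly time-invariant over a short panel but drifts over a long one, and the within (fixed effects) estimator therefore absorbs less and less of the confounding as the number of rounds T grows.

```python
# Minimal sketch (illustration only, not from the paper): the confounder u
# is almost constant over short panels but drifts over long ones, so the
# within (FE) estimator's bias grows with the number of rounds T.
import numpy as np

rng = np.random.default_rng(0)

def mean_fe_bias(T, N=500, reps=200, beta=1.0):
    biases = []
    for _ in range(reps):
        level = rng.normal(size=(N, 1))                  # time-invariant part
        drift = rng.normal(scale=0.3, size=(N, T)).cumsum(axis=1)
        u = level + drift                                # slowly drifting confounder
        x = u + rng.normal(size=(N, T))                  # regressor correlated with u
        y = beta * x + u + rng.normal(size=(N, T))
        xd = x - x.mean(axis=1, keepdims=True)           # within transformation
        yd = y - y.mean(axis=1, keepdims=True)
        biases.append((xd * yd).sum() / (xd ** 2).sum() - beta)
    return np.mean(biases)

for T in (2, 5, 10, 30):
    print(f"T = {T:>2}: mean FE bias = {mean_fe_bias(T):+.3f}")
```

The time-invariant level differences are swept out for any T, but the drifting component is not, and its within-unit variance (and thus the FE bias) grows as the panel lengthens.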

Given that, it is surprising that the fixed effects (FE) estimator has emerged as the default estimator when trying to identify a causal relationship with longitudinal data. Even Yair Mundlak, who developed the FE estimator to control for management bias when estimating agricultural production functions, recognized that stuff is only time-invariant over short periods. In his original 1961 article, published in the then-Journal of Farm Economics (now the American Journal of Agricultural Economics), he wrote that (emphasis added)

Survey Ordering and the Measurement of Welfare

That’s the title of a new working paper by Wahed Rahman, Jeff Bloem, and me in which we randomly place the module asking survey respondents about their assets either near the beginning (treatment) or at the end (control) of the survey, to see whether placing the module at the end introduces classical (i.e., noise) or non-classical (i.e., bias) measurement error.
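In the notation of a simple reporting model (my notation, not the paper’s), where a_i is respondent i’s reported asset value, a_i* is the true value, and T_i indicates that the asset module comes at the end of the survey, the distinction looks like this:

```latex
% Reporting model (my notation, not the paper's): a_i reported, a_i^* true,
% T_i = 1 if the asset module comes at the end of the survey.
a_i = a_i^{*} + e_i

% Classical error: placement adds noise but no bias.
\mathbb{E}[e_i \mid T_i] = 0, \qquad \mathrm{Var}(e_i \mid T_i = 1) > \mathrm{Var}(e_i \mid T_i = 0)

% Non-classical error: placement shifts the mean of reports.
\mathbb{E}[e_i \mid T_i = 1] \neq \mathbb{E}[e_i \mid T_i = 0]
```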

On average, we have a null finding. That is, whether we ask respondents about their assets early or late in the survey introduces neither classical nor non-classical measurement error. But we do find some interesting treatment heterogeneity: respondents from larger households (i.e., households with more than four individuals) or respondents with low levels (i.e., fewer than six years) of formal education tend to underreport assets when asked about them later in the survey.
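To fix ideas, here is a sketch, on simulated data with hypothetical variable names, of the kind of heterogeneous-effects regression this involves (an illustration, not the code from the paper):

```python
# Hedged sketch (not the authors' code): regress reported asset value on the
# ordering treatment, interacted with household-size and education indicators.
# All variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey data
rng = np.random.default_rng(42)
n = 1200
df = pd.DataFrame({
    "early_module": rng.integers(0, 2, n),   # 1 = asset module near the start
    "hh_size": rng.integers(1, 10, n),
    "educ_years": rng.integers(0, 16, n),
})
df["large_hh"] = (df["hh_size"] > 4).astype(int)
df["low_educ"] = (df["educ_years"] < 6).astype(int)
# Build an outcome where only large households underreport late in the survey
df["log_asset_value"] = (
    10 + 0.2 * df["early_module"] * df["large_hh"] + rng.normal(scale=1, size=n)
)

# Heterogeneous-effects regression with heteroskedasticity-robust SEs
model = smf.ols(
    "log_asset_value ~ early_module * large_hh + early_module * low_educ",
    data=df,
).fit(cov_type="HC1")
print(model.summary().tables[1])
```

A null coefficient on early_module with positive interaction terms would match the pattern described above: no effect on average, but differential reporting in the large-household and low-education sub-samples.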

One caveat: We are assuming that this is happening because of survey fatigue, and thus that the “right” number of assets and the “right” value of those assets are given by respondents who are asked about them earlier. Unfortunately, we have no way of testing whether survey fatigue is the mechanism here. Still, our null finding on average, combined with the fact that our survey was relatively short (just over 75 minutes on average), lends credence to the idea that survey fatigue is what is driving our sub-sample results.

Here is the abstract:

Social and economic policy and research rely on the accurate measurement of welfare. In nearly all instances, measuring welfare requires collecting data via long household surveys that are cognitively taxing on respondents. This can lead to measurement error, both classical (i.e., noisier responses) and non-classical (i.e., biased responses). We embed a survey ordering experiment in a relatively short survey, lasting just over 75 minutes on average, by asking half of our respondents about their assets near the beginning of the survey (treatment) and asking the remainder of our respondents about their assets at the end of the survey (control). We find no evidence that survey ordering introduces classical or non-classical measurement error in either the number of reported assets or the reported asset value in the full sample. But in sub-samples of respondents who (i) are from larger (i.e., more than four individuals) households, or (ii) have low levels (i.e., fewer than six years) of education, we find evidence of differential reporting due to survey ordering. These results highlight important heterogeneity in response bias which, despite the null effect in the full sample, can be meaningful. For example, for respondents from larger households, placing the asset module near the beginning of the survey leads to a 23 percent increase in the total reported asset value relative to placing the same module at the end of the survey.