Was Sandmo Right? How Do Producers React to Price Uncertainty in the Lab?

That is the topic of a new working paper of mine, written with my PhD student Yu Na Lee and my collaborator David Just, titled “Was Sandmo Right? Experimental Evidence on Attitudes to Price Risk and Ambiguity.”

This is probably the most exciting research project I have ever had a chance to work on, and I wish we were releasing a more polished draft. But the submission deadline for presented papers at this summer’s AAEA meetings in San Francisco and the association’s policy of posting those papers online being what they are, the paper is now available, so we might as well talk about it. A caveat is thus in order: this is a rough, preliminary version of a paper that will almost surely look very different once it is published, and it has not yet gone through the peer-review process. Keep that in mind as you read this post.

In this paper, Yu Na (who will be on the market in two years), David, and I decided to build on the results in my award-winning 2013 AJAE article with Chris Barrett and David Just, where we estimated the effects of price volatility on the welfare of rural Ethiopian households. The results in the 2013 paper, however, relied on survey data, which are both noisy and not conducive to the cleanest of identification strategies. So this time around, we decided to gun for the gold standard and turn to the lab.
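For readers who do not know the reference, here is a quick sketch of Sandmo’s (1971) classic result, stated from memory as textbook background rather than taken from our working paper: a risk-averse competitive firm facing an uncertain output price $p$ chooses output $q$ to maximize $E[u(\pi)]$, where $\pi = pq - C(q)$. The first-order condition is

$$E\big[u'(\pi)\,(p - C'(q^{*}))\big] = 0 \quad\Longrightarrow\quad C'(q^{*}) = E[p] + \frac{\operatorname{Cov}\big(u'(\pi),\,p\big)}{E[u'(\pi)]} < E[p],$$

because risk aversion makes $\operatorname{Cov}(u'(\pi), p)$ negative. In words: the firm produces less under price uncertainty than it would if it faced the expected price with certainty, which is the behavioral prediction the paper’s title alludes to.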

You Can’t Test for Exogeneity: Uninformative Hausman Tests

One thing I often see authors doing in some of the papers I handle as an editor or comment on as a reviewer is using a Durbin-Wu-Hausman test (for the sake of brevity, I will just say “Hausman test” throughout this post) to test for exogeneity. The idea for this post came from my reading of Jeff Wooldridge’s article on the control function approach in the latest issue of the JHR (more on that approach in a moment, and yes, this is the same Wooldridge who wrote what is perhaps the best microeconometrics text on the market).

Typically, this is done in an effort to argue that some variable of interest is not really endogenous to the outcome of interest, and it proceeds as follows (note that I am describing a situation where researchers have to rely on observational data):
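In case the mechanics are helpful, here is my own minimal sketch of the standard regression-based (control function) version of that test; the steps are the textbook procedure, not a quotation from any particular paper. Roughly: (i) regress the suspect variable on an instrument and the exogenous controls; (ii) include the first-stage residuals as an extra regressor in the outcome equation; and (iii) test whether the coefficient on those residuals differs from zero. All names and numbers in the Python sketch below are hypothetical, chosen only to illustrate the mechanics on simulated data.

```python
# Hypothetical, minimal sketch of a regression-based (control function)
# Hausman-type test for exogeneity. The variable names, coefficients, and
# data-generating process are all made up for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000

z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # suspect regressor, endogenous by construction
y = 1.0 + 2.0 * x + u                         # outcome

# Step 1: first stage -- regress the suspect regressor on the instrument.
first_stage = sm.OLS(x, sm.add_constant(z)).fit()
v_hat = first_stage.resid                     # first-stage residuals

# Step 2: include the first-stage residuals in the outcome equation.
rhs = sm.add_constant(np.column_stack([x, v_hat]))
second_stage = sm.OLS(y, rhs).fit()

# Step 3: test the coefficient on the residuals. A significant coefficient
# rejects the null that x is exogenous (here it should reject, since x was
# built to be endogenous).
print("t-statistic on first-stage residuals:", second_stage.tvalues[2])
print("p-value:", second_stage.pvalues[2])
```

The catch, and presumably part of what the title of this post is getting at, is that this procedure is only informative if the instrument is itself valid, and instrument validity is precisely what cannot be tested.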

The Chance Result the Whole World Yearned To Believe

My colleagues and I recruited actual human subjects in Germany. We ran an actual clinical trial, with subjects randomly assigned to different diet regimes. And the statistically significant benefits of chocolate that we reported are based on the actual data. It was, in fact, a fairly typical study for the field of diet research. Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded.

From a fascinating article with the click-baity title “I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here’s How,” by John Bohannon on io9.

“But wait,” you say, “if this was all based on actual data and the findings weren’t fabricated, why are they meaningless?” Because of this: