The latest issue of the Canadian Journal of Agricultural Economics has an article by Maurice Doyon titled “Can Agricultural Economists Improve their Policy Relevance?”
The article is a summary of Doyon’s presidential address to the 2014 meetings of the Canadian Agricultural Economics Society. In his address, Doyon posits that in order to improve their policy relevance, agricultural economists need to take seriously some of the criticisms that have been directed at economics in general. Some of his recommendations are that:
- We should be more transparent by showing all of our robustness checks,
- We should incorporate insights from other behavioral sciences, and
- We should learn to write for a broader audience.
I don’t disagree with any of those recommendations, but my view is that many young (i.e., younger than 40 or so) agricultural economists are already doing those things. Indeed,
- Many of us have followed the lead of other applied microeconomists (e.g., labor and development economists in particular) in presenting results that are as transparent as possible and including as many robustness checks as we can imagine in our work,
- Many (though not all) of us now have a healthy appreciation for the insights generated by behavioral economists, and
- Quite a few of us are involved in the popularization of what we do, whether by blogging, writing popular press pieces, or by being actively engaged in social media.
I am not saying Doyon’s remarks miss the mark–many agricultural economists would greatly benefit from following his recommendations–but if I had been the one making those remarks, I would have gone a step further, and my overwhelming recommendation would have been this: In order to enhance their policy relevance, agricultural economists have to do two things: (i) Answer bigger questions, and (ii) Take causal identification seriously.
Answer Bigger Questions
What counts as a minimum publishable unit (i.e., the smallest scholarly contribution that can be published in a journal) in agricultural economics is much smaller than in other social sciences. My understanding is that this is largely because, unlike economics departments, which are housed either in business schools or in colleges of arts and sciences, almost all agricultural economics departments in the world are housed in agricultural colleges, and so agricultural economists are judged by “hard” scientists for promotion and tenure.
But “hard” scientists tend to publish many, many more articles than we do, and those articles tend to be shorter. So when a promotion and tenure committee in an agricultural college studies an agricultural economist’s CV, it is likely to take note of the fact that that agricultural economist has “only” about 10 articles–if that. I know of one department, for example, where a 50% research, 50% teaching appointment means you have to have published at least 2.4 articles per year as a necessary condition for tenure! If someone aims to publish only articles of AJAE or comparable quality, that is an almost impossible standard. It also means that if someone were to show up for tenure with three articles in the American Economic Review and three in Econometrica, that person would not get tenure, because at a rate of one article per year, they would have fallen far below the bright-line incentive put in place by the college.
(Before anyone thinks that I am criticizing my own institution: Thankfully, this is not the case at the University of Minnesota, where college-level committees understand that applied economics is different from the “harder” sciences, and that quantity does not mean quality. Likewise at top-ranked programs like my alma mater.)
Thus, the fact that our profession is housed in colleges where our peers in other departments publish many more, albeit shorter, articles changes the incentive structure, and the minimum publishable unit gets smaller as a result. But it is probably the case that the smaller the minimum publishable unit, the less interesting it will be to the world at large, and the more one’s contribution is likely to be a small tweak on or a minor extension of somebody else’s work–hardly the stuff of policy relevance.
What’s my solution? First, stronger department heads or chairs who argue forcefully with college-level representatives (remember the adage: “Deans can’t read, but they can count”) that the social sciences are fundamentally different. Second, editors of agricultural economics journals should deliberately go for articles that tackle bigger questions, or articles that are directly relevant to policy debates.
Take Causal Identification Seriously
This point applies to those of us who do empirical work. I often get the feeling that agricultural economists are much more enamored with technique than economists in other fields. Whereas the Credibility Revolution in applied microeconomics quickly made its way from labor to development and then into fields like health economics, law and economics, economic history, and so on, I feel agricultural and applied economists have been more resistant to incorporating the usual methods used to identify causal relationships more cleanly, such as randomized controlled trials, plausibly exogenous instrumental variables, regression discontinuity designs, difference-in-differences designs, and so on. Similarly, many are still likely to invoke features of their theoretical model in lieu of proper empirical identification.
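To make one of those designs concrete, here is a minimal sketch of the simplest two-group, two-period difference-in-differences estimator. This is a toy example of my own, with made-up data and variable names (treated, post, outcome), not anything drawn from Doyon’s article:

```python
# Toy difference-in-differences sketch (hypothetical data and variable names).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 4_000

df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # e.g., counties covered by a program
    "post": rng.integers(0, 2, n),      # observed after the program starts
})
true_effect = 1.5
df["outcome"] = (
    2.0                                         # baseline mean
    + 0.8 * df["treated"]                       # level difference between groups
    + 0.4 * df["post"]                          # common time trend
    + true_effect * df["treated"] * df["post"]  # treatment effect of interest
    + rng.normal(size=n)
)

# DiD by hand: (treated, post - treated, pre) - (control, post - control, pre)
means = df.groupby(["treated", "post"])["outcome"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"true effect: {true_effect:.2f}, DiD estimate: {did:.2f}")
```

The estimator itself is trivial; the identifying assumption doing all the work is parallel trends–that is, absent the program, treated and control units would have followed the same time path.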
My view is this: Without credible and clean identification, it is hard to talk of policy relevance. Sure, your unidentified, complicated likelihood-based model will crank out an estimate of whether people are more likely to do this or that when so and so holds–cranking out estimates is what empirical models are designed to do. But without proper identification, you’re not sure whether you are getting the right sign of the effect of so and so on whether people do this or that, much less getting the right magnitude. A policy maker might listen to you once, but why would they listen to you a second time if the other guy had a better estimate?
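To see why even the sign can come out wrong, here is a small simulation sketch. Again, the variables (adoption, outcome, an unobserved confounder, an instrument) are purely hypothetical; the point is simply that a naive regression can flip the sign of the true effect when a confounder lurks, while a plausibly exogenous instrument recovers it:

```python
# Toy simulation (hypothetical variables) of omitted-variable bias versus IV.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

confounder = rng.normal(size=n)    # unobserved, e.g., farmer ability
instrument = rng.normal(size=n)    # plausibly exogenous shifter of adoption
adoption = 1.0 * instrument - 2.0 * confounder + rng.normal(size=n)
true_effect = 0.5
outcome = true_effect * adoption + 3.0 * confounder + rng.normal(size=n)

# Naive regression of outcome on adoption: biased by the omitted confounder.
X = np.column_stack([np.ones(n), adoption])
ols = np.linalg.lstsq(X, outcome, rcond=None)[0]

# Two-stage least squares: instrument -> fitted adoption -> outcome.
Z = np.column_stack([np.ones(n), instrument])
adoption_hat = Z @ np.linalg.lstsq(Z, adoption, rcond=None)[0]
X_hat = np.column_stack([np.ones(n), adoption_hat])
iv = np.linalg.lstsq(X_hat, outcome, rcond=None)[0]

print(f"true effect:        {true_effect:+.2f}")
print(f"naive OLS estimate: {ols[1]:+.2f}")  # close to -0.5: the wrong sign, by construction
print(f"2SLS estimate:      {iv[1]:+.2f}")   # close to the true +0.5
```

Nothing about the estimator here is fancy; what matters is whether the exclusion restriction–the instrument moves adoption but affects the outcome only through adoption–is credible.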
Here, my solution is more practical than in the previous point: Econometrics sequences should go beyond asymptotics and theorem-proving. I get that students need to know the basics of probability and statistics before they get to do econometrics. I get that they should be taught what probits, logits, tobits, Poisson, negative binomial, and hazard models are. But this does not always yield students who know when a result is credibly identified. That is the reason why I decided to develop a cookbook econometrics class for our graduate students–one that is more focused on the craft than on the technique of econometrics.
By the way, this isn’t the first time I have made this point–I made the exact same point in 2012 in my keynote to the Economics and Management of Risk in Agriculture and Natural Resources conference.