Econometrics Teaching Needs an Overhaul

Via Matt Bogard, who has a really good post up titled “Linear Literalism and Fundamentalist Econometrics,” the World Economic Forum website has an interesting piece of popular-press econometrics (!) by Angrist and Pischke titled “Why Econometrics Teaching Needs an Overhaul.” Some choice excerpts:

Hewing to the table of contents in legacy texts, today’s market leaders continue to feature models and assumptions at the expense of empirical applications. Core economic questions are mentioned in passing if at all, and empirical examples are still mostly contrived, as in Studenmund (2011), who introduces empirical regression with a fanciful analysis of the relationship between height and weight. The first empirical application in Hill, Griffiths, and Lim (2011: 49) explores the correlation between food expenditure and income. This potentially interesting relationship is presented without a hint of why or what for. Instead, the discussion here emphasises the fact that “we assume the data… satisfy assumptions SR1-SR5.” An isolated bright spot is Stock and Watson (2011), which opens with a chapter on ‘Economic Questions and Data’ and introduces regression with a discussion of the causal effect of class size on student performance. Alas, Stock and Watson also return repeatedly to more traditional model-based abstraction.

The disconnect between econometric teaching and econometric practice goes beyond questions of tone and illustration. The most disturbing gap here is conceptual. The ascendance of the five core econometric tools–experiments, matching and regression methods, instrumental variables, differences-in-differences and regression discontinuity designs–marks a paradigm shift in empirical economics. In the past, empirical research focused on the estimation of models, presented as tests of economic theories or simply because modelling is what econometrics was thought to be about. Contemporary applied research asks focused questions about economic forces and economic policy.

I have argued in the past in favor of teaching the craft in addition to the technique of econometrics, both here and here. Here is a bit more of Angrist and Pischke’s piece, and I emphasize a bit I disagree with:

The unapologetic focus on causal relationships that’s emblematic of modern applied econometrics emerged gradually in the 1980s and has since accelerated. Today’s econometric applications make heavy use of quasi-experimental research designs and randomized trials of the sort once seen only in medical research. In fact, the notion of a randomized experiment has become a fundamental unifying concept for most applied econometric research. Even where random assignment is impractical, the notion of the experiment we’d like to run guides our choice of empirical questions and disciplines our use of non-experimental tools and data.

See an old post of mine titled “Of Gold Standards and Golden Means” for why I don’t think methods should be driving our choice of questions.