Economics, History, and Why Social Scientists Don’t Know Much About Anything

Economics and history have not always got on. Edward Lazear’s advice that all social
scientists adopt economists’ toolkit evoked a certain skepticism, for mainstream
economics repeatedly misses major events, notably stock market crashes, and rhetoric can be mathematical as easily as verbal. Written by winners, biased by implicit assumptions, and innately subjective, history can also be debunked. Fortunately, each is learning to appreciate the other … Each field has infirmities, but also strengths. We propose that their strengths usefully complement each other in untangling the knotty problem of causation.

This complementarity is especially useful to economics, where establishing what causes what is often critical to falsifying a theory. Carl (sic) Popper argues that scientific theory advances by successive falsifications, and makes falsifiability the distinction between science and philosophy. Economics is not hard science, but nonetheless gains hugely from a now nearly universal reliance on empirical econometric tests to invalidate theory. Edward O. Wilson puts it more bluntly: “Everyone’s theory has validity and is interesting. Scientific theories, however, are fundamentally different. They are designed specifically to be blown apart if proved wrong; and if so destined, the sooner the better.” Demonstrably false theories are thus pared away, letting theoreticians focus on as yet unfalsified theories, which include a central paradigm the mainstream of the profession regards as tentatively true. The writ of empiricism is now so broad that younger economists can scarcely imagine a time when rhetorical skill, rather than empirical falsification, decided issues, and the simplest regression was a day’s work with pencil and paper.
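To make concrete what the quoted passage means by an empirical econometric test — and how far we've come from "a day's work with pencil and paper" — here is a toy example of my own (the data and hypothesis are invented, not from the paper): fitting a simple OLS regression by hand and computing the t-statistic that would be used to try to falsify the null hypothesis of no effect.

```python
# Toy illustration: a simple OLS regression and a test of whether the slope
# is statistically distinguishable from zero -- the kind of computation that
# once took a day with pencil and paper.
import math

# Hypothetical data: y is roughly 2x plus a fixed disturbance.
x = list(range(1, 11))
e = [0.5, -0.3, 0.2, -0.1, 0.4, -0.2, 0.1, -0.4, 0.3, -0.5]
y = [2 * xi + ei for xi, ei in zip(x, e)]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# OLS slope and intercept from the normal equations.
s_xx = sum((xi - x_bar) ** 2 for xi in x)
s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
slope = s_xy / s_xx
intercept = y_bar - slope * x_bar

# Residual variance and the slope's standard error.
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in residuals) / (n - 2)
se_slope = math.sqrt(s2 / s_xx)

# t-statistic for H0: slope = 0; a large value rejects the null.
t_stat = slope / se_slope
```

Of course, as the quotes below suggest, rejecting a null hypothesis in a regression like this one and actually falsifying an economic theory are two very different things.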

That’s from a new working paper (the link opens a .pdf) by Randall Morck and Bernard Yeung.

Having initially chosen to major in economics after taking a few philosophy of science classes, I hold this topic near and dear to my heart. And while I agree that economists have much to learn from other social scientists, I am not sure that many economic theories have been falsified on the basis of econometrics, no matter how solid the evidence. To see this, just think of how much of a rhetorical mess the debate on gun control has become, and ask yourself whether any amount of empirical evidence will change anyone’s mind on the topic.

In the general notes to chapter 1 of his A Guide to Econometrics (p. 8), Kennedy collects the following quotes:

Very little of what economists will tell you they know … has been discovered by running regressions. Regressions on government-collected data have been used mainly to bolster one theoretical argument over another. But the bolstering they provide is weak, inconclusive, and easily countered by someone else’s regressions (Bergmann, 1987, p. 192).

No economic theory was ever abandoned because it was rejected by some empirical econometric test, nor was a clear-cut decision between competing theories made in light of the evidence of such a test (Spanos, 1986, p. 660).

I invite the reader to try … to identify a meaningful hypothesis about economic behavior that has fallen into disrepute because of a formal statistical test (Summers, 1991, p.130).

Now, you might argue that those quotes precede the “credibility revolution” in applied microeconomics. Fair enough. But first, that reaction would be an example of the cognitive bias known as chronocentrism. And second, many of today’s most “credible” studies trade external validity for internal validity. The findings of a randomized controlled trial evaluating the welfare impacts of microfinance in Uganda, for example, will not necessarily generalize to a similar program in neighboring Rwanda.

Even the findings of famous quasi-experimental (i.e., instrumental variables-based) studies that were thought to have a great deal of external validity, such as Angrist and Krueger’s (1991) study of the impacts of schooling on wages and Acemoglu et al.’s (2001) study of the impacts of colonial institutions on present-day economic performance, have been called into question.

In his 2008 book Putting Econometrics in Its Place, Swann made a point similar to the one Morck and Yeung make in their paper. Here’s Kennedy (2008, p. 7) again:

Swann complains that econometrics has come to play a too-dominant role in applied economics; it is viewed as a universal solvent when in fact it is no such thing. He argues that a range of alternative methods, despite their many shortcomings, should be used to supplement econometrics. In this regard he discusses at length the possible contributions of experimental economics, surveys and questionnaires, simulation, engineering economics, economic history and the history of economic thought, case studies, interviews, common sense and intuition, and metaphors. Each of these, including econometrics, has strengths and weaknesses. Because they complement one another, however, a wise strategy would be to seek information from as many of these techniques as is feasible.

Lastly, although this post rags on economists, I don’t know that anyone in the social sciences is guilt-free. Everyone has a favorite method that they use at the expense of other methods, and I am just as guilty of this as the next social scientist.

HT: Tom Pepinsky, in a post on his blog Indolaysia.


One comment

  1. GP

    I’m glad to see Summers (1991) cited. That is the paper that came to mind when I started reading your post. Though he writes about macroeconomics, he raises many important issues that continue to be relevant for social science today. (Link below)

    I would add that it is unfortunate that the authors use the uninformative (and to some, pejorative) hard/soft science dichotomy, when the experimental/observational data distinction is available, informative, and nonjudgmental.