# “If You Can’t Control What’s Important, You Make Important What You Can Control”

> More surprisingly, Posner spends significant firepower assailing “The Bluebook: A Uniform System of Citation.” This compendium (The Chicago Manual of Style for lawyers) might seem an unworthy target. Yet he is excoriating not just the Bluebook, but also the substitution of style for substance it represents. When created in 1926, supposedly by the great appellate judge Henry Friendly, the manual was 26 pages. A recent edition spans 511 pages. Posner appears to believe that following the Bluebook is about as bad as rearranging deck chairs on the Titanic — and by reverse order of manufacture, no less. He casts the Bluebook as a neurotic reaction to external complexity; *if you cannot control what is important, you make important what you can control*.

From a *New York Times* review of Richard Posner’s new book, *Reflections on Judging*. I emphasized the last part because it was oddly reminiscent of a disturbing trend in my own discipline.

# A Nifty Fix for When Your Treatment Variable Is Measured with Error (Technical)

One of the advantages of having really smart colleagues — the kind who exhibit genuine intellectual curiosity, and who are truly interested in doing things well — is that you get to learn a lot from them.

I was recently having a conversation with my colleague and next-door office neighbor Joe Ritter in which we were discussing the possibility that the (binary) treatment variable in a paper I am working on might suffer from some misclassification. That is, my variable D = 1 if an individual has received the treatment and D = 0 otherwise, but it is possible that some people for whom D = 1 actually report D = 0, and that some people for whom D = 0 actually report D = 1.

When the possibility that my treatment variable might suffer from misclassification (or measurement error) arose, Joe recalled that he’d read a paper by Christopher R. Bollinger about this a while back. A few hours later, he sent me an email to which he’d attached the paper. Here is the abstract: Continue reading
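To see why misclassification of a binary regressor is worth worrying about, here is a quick simulation sketch (my own illustrative example with made-up numbers, not Bollinger’s estimator): randomly flipping a fraction of reported treatment statuses attenuates the OLS coefficient toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True binary treatment and an outcome with a true effect of 2.0
D = rng.integers(0, 2, n)
y = 1.0 + 2.0 * D + rng.normal(size=n)

# Misclassification: 10% of observations report the wrong treatment status
flip = rng.random(n) < 0.10
D_obs = np.where(flip, 1 - D, D)

def ols_slope(x, y):
    """Bivariate OLS slope: cov(x, y) / var(x)."""
    x_c = x - x.mean()
    return (x_c * y).sum() / (x_c ** 2).sum()

print(ols_slope(D, y))      # close to the true 2.0
print(ols_slope(D_obs, y))  # attenuated: about 1.6 = 2.0 * (1 - 2 * 0.10)
```

With symmetric flip probability p and Pr(D = 1) = 0.5, the probability limit of the naive slope is the true effect scaled by (1 − 2p), which is why estimates like the second one motivate the kind of correction Bollinger studies.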

# Love It or Logit, or: Man, People *Really* Care About Binary Dependent Variables

Last Monday’s post, in which I ranted a bit about the opposition to estimating linear probability models (LPM) instead of probits and logits, turned out to be very popular. In fact, that post is now in my top three most popular posts ever.

Last Monday morning, when my wife left for work, I told her I was expecting a meager number of page views that day given my choice of post topic. I was wrong: people really care about binary dependent variables. Continue reading

# A Rant on Estimation with Binary Dependent Variables (Technical)

Suppose you are trying to explain some outcome $y$, where $y$ is equal to 0 or 1 (e.g., whether someone is a nonsmoker or a smoker). You also have data on a vector of explanatory variables $x$ (e.g., someone’s age, their gender, their level of education, etc.) and on a treatment variable $D$, which we will also assume is binary, so that $D$ is equal to 0 or 1 (e.g., whether someone has attended an information session on the negative effects of smoking).

If you were interested in knowing the effect of attending the information session on the likelihood that someone is a smoker, i.e., the impact of $D$ on $y$, the equation of interest in this case is Continue reading
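The excerpt cuts off before the equation, but the setup above can be sketched as a linear probability model estimated by OLS (a simulation of my own, with hypothetical coefficients): regressing the binary $y$ on an intercept, $D$, and $x$ recovers the effect of $D$ on the probability that $y = 1$.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Hypothetical data: one covariate and a binary treatment
z = np.clip(rng.normal(size=n), -2, 2)
D = rng.integers(0, 2, n)

# Assumed true linear probability model: Pr(y = 1) = 0.30 + 0.20*D + 0.05*z
p = 0.30 + 0.20 * D + 0.05 * z
y = (rng.random(n) < p).astype(float)

# LPM: OLS of the binary outcome on an intercept, D, and z
X = np.column_stack([np.ones(n), D, z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # beta[1] should be close to 0.20, the effect of D on Pr(y = 1)
```

Because the simulated probabilities are genuinely linear in $D$ and $z$, OLS recovers them directly; the usual probit/logit debate is about what happens when they are not.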

# Economics, History, and Why Social Scientists Don’t Know Much About Anything

> Economics and history have not always got on. Edward Lazear’s advice that all social scientists adopt economists’ toolkit evoked a certain skepticism, for mainstream economics repeatedly misses major events, notably stock market crashes, and rhetoric can be mathematical as easily as verbal. Written by winners, biased by implicit assumptions, and innately subjective, history can also be debunked. Fortunately, each is learning to appreciate the other … Each field has infirmities, but also strengths. We propose that their strengths usefully complement each other in untangling the knotty problem of causation.
>
> This complementarity is especially useful to economics, where establishing what causes what is often critical to falsifying a theory. Carl (sic) Popper argues that scientific theory advances by successive falsifications, and makes falsifiability the distinction between science and philosophy. Economics is not hard science, but nonetheless gains hugely from a now nearly universal reliance on empirical econometric tests to invalidate theory. Edward O. Wilson puts it more bluntly: “Everyone’s theory has validity and is interesting. Scientific theories, however, are fundamentally different. They are designed specifically to be blown apart if proved wrong; and if so destined, the sooner the better.” Demonstrably false theories are thus pared away, letting theoreticians focus on as yet unfalsified theories, which include a central paradigm the mainstream of the profession regards as tentatively true. The writ of empiricism is now so broad that younger economists can scarcely imagine a time when rhetorical skill, rather than empirical falsification, decided issues, and the simplest regression was a day’s work with pencil and paper. Continue reading