On the Worthwhile Canadian Initiative blog, Frances Woolley had a good post about why beginner econometricians get so worked up about the wrong things:
[I]t is rare that I will have someone come to my office hours and ask “Have I chosen my sample appropriately?” Instead, year after year, students are obsessed about learning how to use probit or logit models, as if their computer would explode, or the god of econometrics would smite them down, if they were to try to explain a 0-1 dependent variable by running an ordinary least squares regression.
I try to explain: “Look, it doesn’t matter. It doesn’t make much difference to your results. It’s hard to come up with an intuitive interpretation of what logit and probit coefficients mean, and it’s a hassle to calculate the marginal effects. You can run logit or probit if you want, but run a linear probability model as well, so I can tell whether or not anything weird is going on with the regression.”
But they just don’t believe me.
I am happy to concede to Dave Giles that, all else being equal, it is better to use probit than ordinary least squares, and that Stata’s margins command is not that difficult for an undergraduate to use.
But all else is not equal. Using probit will not save a regression that pools men and women into a single sample to estimate the impact of having young children on the probability of being employed, yet fails to include a gender*children interaction term.
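To see Woolley's point in practice, here is a minimal sketch of the comparison, in Python with statsmodels rather than Stata, on made-up data; the variable names (employed, female, young_kids) are hypothetical, and the point is simply to put the linear probability model's coefficients next to the probit's average marginal effects:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a 0-1 employment indicator, a young-children indicator,
# and a female indicator (all variable names are made up for illustration)
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "young_kids": rng.integers(0, 2, n),
})
latent = 0.6 - 0.1 * df["young_kids"] - 0.4 * df["female"] * df["young_kids"]
df["employed"] = (latent + rng.normal(0, 0.5, n) > 0).astype(int)

# Linear probability model with the gender*children interaction,
# using heteroskedasticity-robust (HC1) standard errors
lpm = smf.ols("employed ~ female * young_kids", data=df).fit(cov_type="HC1")

# Probit with the same specification, followed by average marginal effects
# (note: get_margeff treats the interaction column as just another regressor)
probit = smf.probit("employed ~ female * young_kids", data=df).fit(disp=False)
ame = probit.get_margeff(at="overall")

print(lpm.params)
print(ame.summary())
```

On data like these, the two sets of estimates will typically be very close, which is exactly the "it doesn't make much difference to your results" claim above.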
Indeed, nothing screams “GRAD STUDENT!!!” louder than an obsession with fancy estimators — usually of the maximum likelihood variety, so probit, logit, tobit, etc., sometimes of the Bayesian variety — instead of with whether one has reasonably identified one’s parameter of interest (via a research design that relies on a plausibly exogenous source of variation), or with whether one’s findings have some reasonable claim to external validity (via the use of a representative sample).
There is an ontological order of importance to things in applied work, one that unfortunately goes unspoken in most econometrics classes. That order is roughly as follows:
- Internal validity: Is your parameter of interest credibly identified? In other words, are you estimating a causal relationship, or are you merely dealing with a correlation? If the latter, how close can you get to estimating a causal relationship with the best available data and methods?
- External validity: Are your findings applicable to observations outside of your sample? Why or why not?
- Precision: Are your standard errors right? Have you accounted for things like heteroskedasticity? Did you cluster your standard errors at the right level?
- Data-generating process: Did you properly model the DGP? For example, does your estimation procedure account for the fact that, say, your dependent variable is a count (a nonnegative integer), which would require a Poisson or negative binomial regression?
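To make the last two items concrete, here is a minimal sketch, again in Python with statsmodels on made-up data with hypothetical names (visits, treated, village), of a count model whose standard errors are clustered at the group level:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a count outcome ("visits") observed for individuals
# grouped into villages (the cluster variable); all names are made up
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "village": rng.integers(0, 25, n),
    "treated": rng.integers(0, 2, n),
})
df["visits"] = rng.poisson(np.exp(0.2 + 0.3 * df["treated"]))

# A Poisson regression that respects the count nature of the outcome,
# with standard errors clustered at the village level
pois = smf.poisson("visits ~ treated", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["village"]}, disp=False
)
print(pois.summary())
```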
Look, I realize getting the right standard errors is important. But more important than internal validity?
Likewise, I realize that it is important to account for the fact that your dependent variable is ordered and categorical, but with 150 observations, you’re better off relying on a good research design and a linear regression than on a likelihood-based procedure (whose desirable properties are only guaranteed asymptotically, and n = 150 hardly counts as asymptotic), especially if you have any claim to informing policy or to learning something about individual behavior.
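For what it’s worth, running both is cheap. Here is a minimal sketch in Python with statsmodels on simulated data with n = 150 (hypothetical throughout) that puts an OLS fit of the ordered outcome, treated as numeric, next to an ordered logit:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical small sample (n = 150) with an ordered, categorical outcome
# coded 0/1/2 (e.g., "low", "medium", "high")
rng = np.random.default_rng(1)
n = 150
x = rng.normal(size=n)
latent = 0.5 * x + rng.logistic(size=n)
y = np.digitize(latent, bins=[-0.5, 0.5])  # yields 0, 1, or 2

# Option 1: treat the ordered outcome as numeric and run OLS
# with heteroskedasticity-robust standard errors
ols = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HC1")

# Option 2: an ordered logit, whose justification is asymptotic
ologit = OrderedModel(y, x[:, None], distr="logit").fit(method="bfgs", disp=False)

print(ols.params)
print(ologit.params)
```

The point, as in Woolley’s advice above, is to report both, with the linear fit serving as a check on whether anything weird is going on in the likelihood-based estimates.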