One thing I often see authors doing in some of the papers I get to handle as an editor or comment on as a reviewer is using a Durbin-Wu-Hausman test–for the sake of brevity, I will just say “Hausman test” throughout this post–to test for exogeneity. The idea for this post came from my reading of Jeff Wooldridge’s article on the control function approach in the latest issue of the JHR (more on that approach in a moment, and yes, the same Wooldridge who wrote what is perhaps the best microeconometrics text on the market).
Typically, this is done in an effort to argue that some variable of interest is not really endogenous to the outcome of interest, and it proceeds as follows (note that I am describing a situation where researchers have to rely on observational data):
- The authors are interested in the (causal) impact of X on Y, but they can already anticipate their reviewers’ comment that X is endogenous to Y. Wishing to deflect that comment, they decide to test for the exogeneity of X.
- The authors start by estimating a simple specification of their equation of interest that treats X as exogenous.
- The authors then find an instrumental variable Z–typically not a great one–from among the set of all variables in their data set. They then estimate a specification of their equation of interest that instruments X with Z.
- Finally, they compare both sets of estimates with a Hausman test.
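To fix ideas, here is a minimal sketch of the procedure described above on simulated data. Everything here is hypothetical–the data-generating process, the variable names–and in practice you would use canned routines (e.g., Stata’s hausman command) rather than rolling your own, but it shows the mechanics: estimate OLS, estimate 2SLS, and compare.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical DGP where X is endogenous: the error u enters both X and Y.
z = rng.normal(size=n)                        # instrument Z
u = rng.normal(size=n)                        # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # X is correlated with u
y = 1.0 + 2.0 * x + u                         # true effect of X on Y is 2

def ols(X, y):
    """OLS coefficients and homoskedastic covariance matrix."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - X.shape[1])
    return b, s2 * XtX_inv

# Step 1: OLS treating X as exogenous.
X = np.column_stack([np.ones(n), x])
b_ols, V_ols = ols(X, y)

# Step 2: 2SLS, instrumenting X with Z.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first-stage fitted values
X_hat = np.column_stack([np.ones(n), x_hat])
b_iv, _ = ols(X_hat, y)
resid_iv = y - X @ b_iv                            # 2SLS residuals use actual X
s2_iv = resid_iv @ resid_iv / (n - 2)
V_iv = s2_iv * np.linalg.inv(X_hat.T @ X_hat)

# Step 3: Hausman statistic on the coefficient of X,
# chi-squared with 1 degree of freedom under the null of exogeneity.
diff = b_iv[1] - b_ols[1]
H = diff**2 / (V_iv[1, 1] - V_ols[1, 1])
print(f"OLS: {b_ols[1]:.3f}, IV: {b_iv[1]:.3f}, H = {H:.1f}")
```

Because X is endogenous by construction here, OLS is biased away from the true coefficient of 2 while 2SLS recovers it, so the test rejects; with an invalid Z, both the 2SLS estimate and the test itself would be garbage.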
Recall that the null hypothesis of the Hausman test is that all is well with the world. That is, that once you account for sampling variance, the estimates do not differ systematically between the two specifications. Put differently, the null of the Hausman test is that X is exogenous to Y.
And believe me, this is usually used to claim that X is indeed exogenous to Y. But here is what is (typically) wrong with this picture:
- Even if you fail to reject the null of exogeneity, how convincing is this, really? By construction, at the 90, 95, or 99 percent confidence level, the test fails to reject 90, 95, or 99 percent of the time when the null is true, i.e., failing to reject the null is what you would expect in most cases. So much like a null result is typically unconvincing and has to be extremely important as well as shown in a myriad of different ways before it can be published, and much like we never accept a null hypothesis–at best, you fail to reject it–a failure to reject the null of exogeneity in a Hausman test is not a blank check to “accept” exogeneity. It is a small bit of information, and not a very good one at that.
- Usually, this testing strategy is used in papers where the authors would rather claim exogeneity of X because they don’t have a great identification strategy, e.g., they don’t have a good instrumental variable–that is, theirs is not valid, no matter how relevant it is–to identify the causal effect of X. But as with almost everything else in econometrics, the GIGO principle applies: Garbage in, garbage out. If you use a bad instrument for your Hausman test (i.e., one that does not meet the exclusion restriction), what do you learn from your test? The answer is “not much.”
- If the authors have a good instrumental variable, i.e., one that is relevant and valid, even if they fail to reject the null of the Hausman test that X is exogenous, they will typically show both the OLS and IV results side-by-side, because reviewers will want to see them. Besides, if you have a good IV, you will rarely test for exogeneity, and if your estimated coefficients happen to be very similar, you will just make note of it. (UPDATE: I knew I had forgotten something, and Nicolas Van de Sijpe reminded me of it: When you fail to reject the null of exogeneity with a valid IV, you would want to go with the OLS results because under the null, OLS is efficient.)
I promised I would talk about the control function approach. Wooldridge’s article is an excellent review of the approach for people who, like me, are not terribly familiar with it and tend to stick to rudimentary methods (i.e., linear models).
The article did make me realize what the advantage of the control function approach is, however, when using linear models: It allows running a heteroskedasticity-robust Hausman test. But when I read that, I thought: “Great, but what gives? If you have a good IV, would you really care what the Hausman test tells you anyway?” And that’s the thing with good IVs: Use’em if you got’em.
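For readers who have not seen it, the linear-model version of the control function approach is simple enough to sketch in a few lines: regress X on Z, keep the first-stage residuals, and add those residuals as a regressor in the outcome equation. A robust t-test on the residuals’ coefficient then serves as a heteroskedasticity-robust test of the exogeneity of X. The data-generating process below is hypothetical, and a real application would use canned robust standard errors (e.g., Stata’s vce(robust) option) rather than hand-coded HC1.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Hypothetical DGP: X is endogenous, Z is a valid and relevant IV.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u

def ols_hc1(X, y):
    """OLS with heteroskedasticity-robust (HC1) standard errors."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * (e**2)[:, None])
    V = XtX_inv @ meat @ XtX_inv * len(y) / (len(y) - X.shape[1])
    return b, np.sqrt(np.diag(V))

# Step 1: first stage, regress X on Z and keep the residuals.
Z = np.column_stack([np.ones(n), z])
v_hat = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Step 2: outcome equation augmented with the first-stage residuals.
X2 = np.column_stack([np.ones(n), x, v_hat])
b, se = ols_hc1(X2, y)

# The robust t-statistic on v_hat tests H0: X is exogenous.
t_vhat = b[2] / se[2]
print(f"coefficient on X: {b[1]:.3f}, robust t on v_hat: {t_vhat:.1f}")
```

A nice side effect in the linear case with one endogenous regressor: the coefficient on X in the augmented regression is the 2SLS estimate.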
Now, lest you think I view Hausman tests as useless, I happen to think Hausman tests certainly have their use, and I have used them very recently in my own work. The context was an experimental research project I will be blogging about later this week, in which we had experimental subjects play 20 rounds of the same game.
In that case, because the variables of interest were randomly assigned by us, it intuitively made sense to use random effects estimators, which would allow us to retain subject-specific controls (e.g., their age, gender, ethnicity, etc.) in our analysis. Recall that random effects works well when you know your right-hand side variables to be exogenous–and it doesn’t get much more exogenous than when it’s randomly assigned by the experimenter–because the random effects estimator is then efficient.
Still, we decided to run Hausman tests to test the validity of our random effects approach relative to a fixed effects approach, because in that case, it made sense to do so: a rejection of the null would have been an indication that, no matter how much better the random effects estimator was in principle, the data were telling us to go with a fixed effects estimator.
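As a rough illustration of that last point, here is a sketch of a fixed-effects-versus-random-effects Hausman test on a simulated panel. The setup is entirely hypothetical: N subjects, T rounds, and a subject effect deliberately correlated with the regressor, so the test should reject. For simplicity the random effects quasi-demeaning uses the true variance components, which in practice you would have to estimate (in Stata, xtreg, re followed by hausman handles all of this for you).

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 500, 20
sigma_a, sigma_e = 1.0, 1.0          # true variance components (known here)

# Hypothetical panel DGP: the subject effect alpha is correlated with x,
# so random effects is inconsistent and the Hausman test should reject.
alpha = rng.normal(scale=sigma_a, size=N)
a = np.repeat(alpha, T)                          # subject effect, repeated over rounds
x = 0.5 * a + 0.3 * rng.normal(size=N * T)       # x correlated with alpha
y = 2.0 * x + a + rng.normal(scale=sigma_e, size=N * T)
groups = np.repeat(np.arange(N), T)

def group_means(v):
    """Per-subject means of v, in subject order."""
    return np.bincount(groups, weights=v) / T

def slope_and_var(xd, yd, dof):
    """Slope of yd on xd (all variables are mean zero by construction,
    so no constant) and its homoskedastic sampling variance."""
    b = (xd @ yd) / (xd @ xd)
    resid = yd - b * xd
    s2 = resid @ resid / dof
    return b, s2 / (xd @ xd)

# Fixed effects: full within transformation (demean by subject).
x_fe = x - group_means(x)[groups]
y_fe = y - group_means(y)[groups]
b_fe, v_fe = slope_and_var(x_fe, y_fe, N * T - N - 1)

# Random effects: partial (quasi-) demeaning by theta.
theta = 1.0 - np.sqrt(sigma_e**2 / (sigma_e**2 + T * sigma_a**2))
x_re = x - theta * group_means(x)[groups]
y_re = y - theta * group_means(y)[groups]
b_re, v_re = slope_and_var(x_re, y_re, N * T - 2)

# Hausman statistic: chi-squared with 1 df under H0 that RE is consistent.
H = (b_fe - b_re) ** 2 / (v_fe - v_re)
print(f"FE: {b_fe:.3f}, RE: {b_re:.3f}, H = {H:.1f}")
```

Here fixed effects recovers the true coefficient of 2 while random effects is pulled away from it by the correlated subject effect, so the statistic is large and the test rejects–exactly the signal that would tell you to abandon random effects despite its efficiency advantage.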