
A Nifty Fix for When Your Treatment Variable Is Measured with Error (Technical)

One of the advantages of having really smart colleagues — the kind who exhibit genuine intellectual curiosity, and who are truly interested in doing things well — is that you get to learn a lot from them.

I was recently having a conversation with my colleague and next-door office neighbor Joe Ritter in which we were discussing the possibility that the (binary) treatment variable in a paper I am working on might suffer from some misclassification. That is, my variable D = 1 if an individual has received the treatment and D = 0 otherwise, but it is possible that some people for whom D = 1 actually report D = 0, and that some people for whom D = 0 actually report D = 1.

When the possibility that my treatment variable might suffer from misclassification (or measurement error) arose, Joe recalled that he’d read a paper by Christopher R. Bollinger about this a while back. A few hours later, he sent me an email to which he’d attached the paper. Here is the abstract:

In this paper I examine identification and estimation of mean regression models when a binary regressor is mismeasured. I prove that bounds for the model parameters are identified and provide simple estimators which are consistent and asymptotically normal. When stronger prior information about the probability of misclassification is available, the bounds can be made tighter. Again, a simple estimator for these cases is provided. All results apply to parametric and nonparametric models. The paper concludes with a short empirical example.

In other words, suppose you get a coefficient estimate b > 0 from a regression of Y on D, and D is misclassified. Bollinger's method allows you to put bounds a and c on b such that a < b < c. Better yet, if you actually know (or have a good idea of) the rates of misclassification, you can get even tighter bounds d and e, where a < d < b < e < c.
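To get a feel for why this matters, here is a minimal simulation sketch. To be clear, this is not Bollinger's estimator; it only illustrates, in the bivariate case, how a misclassified binary regressor attenuates the naive OLS estimate, and the kind of correction that becomes available once you assume the misclassification rates (pi0 and pi1 below) are known. All parameter values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate data with a misclassified binary treatment (illustrative values) ---
n = 100_000
beta = 2.0              # true treatment effect
p = 0.4                 # P(D* = 1), true treatment prevalence
pi0, pi1 = 0.10, 0.15   # assumed misclassification rates:
                        # pi0 = P(D = 1 | D* = 0), pi1 = P(D = 0 | D* = 1)

d_star = rng.binomial(1, p, n)                 # true (unobserved) treatment
y = 1.0 + beta * d_star + rng.normal(size=n)   # outcome

# Observed treatment: flipped with probability pi1 if treated, pi0 if untreated
u = rng.uniform(size=n)
d = np.where(d_star == 1, (u > pi1).astype(int), (u < pi0).astype(int))

# Naive OLS slope of Y on the misclassified D: attenuated toward zero
b_naive = np.cov(y, d)[0, 1] / np.var(d, ddof=1)

# If pi0 and pi1 are treated as known, back out P(D* = 1) from P(D = 1)
q = d.mean()                             # P(D = 1)
p_hat = (q - pi0) / (1 - pi0 - pi1)      # implied P(D* = 1)

# Undo the attenuation: plim(b_naive) = beta * p(1-p)(1-pi0-pi1) / [q(1-q)]
b_corrected = b_naive * q * (1 - q) / (p_hat * (1 - p_hat) * (1 - pi0 - pi1))

print(f"true beta:     {beta:.3f}")
print(f"naive OLS:     {b_naive:.3f}")      # biased toward zero
print(f"corrected:     {b_corrected:.3f}")  # close to the true beta
```

When the misclassification rates are unknown, no such point correction is available, which is where Bollinger's bounds come in: they tell you how far the truth can be from the naive estimate, and knowing (or bounding) pi0 and pi1 tightens them.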

You can find Bollinger's paper here. It's an old paper, but the fix it offers is so nifty that it was too good not to share, especially given that misclassification is an issue that arises often enough in applied work (I was surprised the paper hasn't been cited more often).