A Nifty Fix for When Your Treatment Variable Is Measured with Error (Technical)

One of the advantages of having really smart colleagues — the kind who exhibit genuine intellectual curiosity, and who are truly interested in doing things well — is that you get to learn a lot from them.

I was recently having a conversation with my colleague and next-door office neighbor Joe Ritter in which we were discussing the possibility that the (binary) treatment variable in a paper I am working on might suffer from some misclassification. That is, my variable D = 1 if an individual has received the treatment and D = 0 otherwise, but it is possible that some people for whom D = 1 actually report D = 0, and that some people for whom D = 0 actually report D = 1.

When the possibility that my treatment variable might suffer from misclassification (or measurement error) arose, Joe recalled that he’d read a paper by Christopher R. Bollinger about this a while back. A few hours later, he sent me an email to which he’d attached the paper. Here is the abstract:

In this paper I examine identification and estimation of mean regression models when a binary regressor is mismeasured. I prove that bounds for the model parameters are identified and provide simple estimators which are consistent and asymptotically normal. When stronger prior information about the probability of misclassification is available, the bounds can be made tighter. Again, a simple estimator for these cases is provided. All results apply to parametric and nonparametric models. The paper concludes with a short empirical example.

In other words, suppose you get a coefficient estimate b > 0 from a regression of Y on D, and that D is misclassified. Bollinger’s method allows you to put bounds a and c on b such that a < b < c. Better yet, if you actually know (or have a good idea of) the rates of misclassification, you can get even tighter bounds — something like d and e, where a < d < b < e < c.
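To see why misclassification is a problem in the first place, here is a minimal simulation — not Bollinger’s estimator, just a sketch of the attenuation it is designed to bound. All the numbers (the true effect, the misclassification rates, the sample size) are made up for illustration:

```python
import random

random.seed(42)

n = 20000
beta_true = 2.0        # true treatment effect (made up)
p_false_neg = 0.1      # P(report D=0 | truly treated) -- assumed rate
p_false_pos = 0.1      # P(report D=1 | truly untreated) -- assumed rate

d_true, d_obs, y = [], [], []
for _ in range(n):
    d = 1 if random.random() < 0.5 else 0          # true treatment status
    # Misclassify the reported status at the rates above
    if d == 1:
        d_rep = 0 if random.random() < p_false_neg else 1
    else:
        d_rep = 1 if random.random() < p_false_pos else 0
    d_true.append(d)
    d_obs.append(d_rep)
    y.append(beta_true * d + random.gauss(0, 1))   # outcome depends on TRUE status

def ols_slope(x, y):
    """Slope from a simple regression of y on x: cov(x, y) / var(x)."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

b_clean = ols_slope(d_true, y)   # close to beta_true
b_noisy = ols_slope(d_obs, y)    # attenuated toward zero

print(f"slope on true D:     {b_clean:.2f}")
print(f"slope on reported D: {b_noisy:.2f}")
```

Running this, the slope on the misclassified treatment comes out noticeably below the true effect — the familiar attenuation bias. Bollinger’s contribution is to show that, even without knowing the misclassification rates, the true parameter can be bounded, and that knowing (or bounding) those rates tightens the interval.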

You can find Bollinger’s paper here. It’s an old paper, but it contains such a nifty fix that it was too good not to share, given that misclassification is an issue that arises often enough. (I was surprised that it hasn’t been cited more often.)


3 comments

  1. Heather

    Does “when stronger prior information about the probability of misclassification is available” basically translate to “when better monitoring data are collected and made available?”

    Given all the buzz right now about monitoring and eval, this might be a worthwhile point to note — how do you get that stronger prior info?

  2. Marc F. Bellemare

    Thanks for your question, Heather. Indeed, monitoring can definitely help improve your knowledge of who has really been treated, rather than relying on what enumerators (or people themselves!) report.

    In certain cases, you can get stronger priors from the extant literature. On female genital cutting, for example, you may ask women what type of FGC they’ve undergone, but that’s measured with error, and there are good articles recording the percentage of misclassification for specific populations. But yes, in an RCT setup, this can be used to bound effects when there’s confusion about whether people have been treated, or when there’s imperfect implementation.

  3. Pingback: A Nifty Fix for When Your Treatment Variable Is...