From a similarly titled post by Dave Giles:
- Always, but always, plot your data.
- Remember that data quality is at least as important as data quantity.
- Always ask yourself, “Do these results make economic/common sense?”
- Check whether your “statistically significant” results are also “numerically/economically significant”.
- Be sure that you know exactly what assumptions are used/needed to obtain the results relating to the properties of any estimator or test that you use.
- Just because someone else has used a particular approach to analyse a problem that looks like yours, that doesn’t mean they were right!
- “Test, test, test!” (David Hendry). But don’t forget that “pre-testing” raises some important issues of its own.
- Don’t assume that the computer code that someone gives to you is relevant for your application, or that it even produces correct results.
- Keep in mind that published results will represent only a fraction of the results that the author obtained; the rest go unreported.
- Don’t forget that “peer-reviewed” does NOT mean “correct results”, or even “best practices were followed”.
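
Rule #1 alone is worth a quick demonstration. Here is a minimal sketch (mine, not from Giles’ post) using Anscombe’s (1973) quartet: four datasets that share, to two decimals, the same means, the same OLS line, and the same R², yet look nothing alike once plotted.

```python
# Anscombe's (1973) quartet: four datasets with (nearly) identical summary
# statistics -- same means, same OLS fit, same R^2 -- but very different plots.
import matplotlib.pyplot as plt
import numpy as np

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

fig, axes = plt.subplots(2, 2, figsize=(8, 6), sharex=True, sharey=True)
for ax, (label, (x, y)) in zip(axes.flat, quartet.items()):
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)       # OLS fit, degree 1
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    ax.scatter(x, y)
    xs = np.array([x.min(), x.max()])
    ax.plot(xs, intercept + slope * xs)
    ax.set_title(f"{label}: y = {intercept:.2f} + {slope:.2f}x, R² = {r2:.2f}")
plt.tight_layout()
plt.show()
```

A table of summary statistics would wave all four datasets through; the plots immediately reveal the nonlinearity, the outlier, and the single leverage point driving the fit.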
All ten are indeed important to keep in mind, but a variant of #6 stands out as an early mistake of mine: in the first versions of my job-market paper, I used the same instrumental variable as Ackerberg and Botticini (2002) to control for endogenous matching.
The big difference was that Ackerberg and Botticini’s context was early Renaissance Tuscany (they indeed had historical data on the land tenancy contracts signed then and there), whereas mine was… 2004 rural Madagascar! Some of the rejections I got on my job-market paper were well-deserved, after all.
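
To see why borrowing an instrument across contexts is dangerous, it helps to write down what an instrument has to satisfy. A minimal sketch in generic textbook notation (my notation, not either paper’s exact model):

```latex
% Structural equation with an endogenous regressor x_i:
\[
  y_i = \beta x_i + \varepsilon_i,
  \qquad \operatorname{Cov}(x_i, \varepsilon_i) \neq 0 .
\]
% A candidate instrument z_i must satisfy two conditions:
\[
  \underbrace{\operatorname{Cov}(z_i, x_i) \neq 0}_{\text{relevance (testable)}}
  \qquad \text{and} \qquad
  \underbrace{\operatorname{Cov}(z_i, \varepsilon_i) = 0}_{\text{exclusion (not testable)}} .
\]
```

Relevance can be checked in the first stage, but the exclusion restriction cannot be tested: it rests entirely on an economic argument about what else z might be correlated with in that particular setting. Whatever made that argument plausible for Tuscan land tenancy contracts does not automatically make it plausible for Malagasy farmers six centuries later.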
By the way, if you do applied work for a living (and I suspect many of you do), you should follow Dave Giles’ blog, which happens to be one of my two or three favorite economics blogs (the others being the Development Impact blog and Jayson Lusk’s blog). I discovered it after my colleague Tim Beatty recommended it to me.