## PSA: p-Values are Thresholds, Not Approximations

A post by Dave Giles reminded me of something important, which I once presumed everyone knew, but which the anecdote I’m about to recount shows needs to be taught explicitly to students.

But first, Dave’s post. It is titled “How (Not) to Interpret that p-Value,” and it links to another, older post whose author lists all the euphemisms used for coefficients that are not significant, but whose p-value is “close enough” to 0.10 (or to 0.05, if you adopt the strict view that the 10 percent significance level is not demanding enough). My own preferred expression, which I am sure I have used more than once, is “borderline significant,” but the author lists hundreds (yes, hundreds!) of such euphemisms, among them “a considerable trend toward significance,” “approaches but fails to achieve a customary level of statistical significance,” “barely escapes being statistically significant at the 5% risk level,” “fell just short of the traditional definition of statistical significance,” “only slightly missed the conventional threshold of significance,” and so on. As the author says: A result that is not statistically significant is still not statistically significant, no matter how you talk about it.

### p-Values are for Crossing from Above, Not Rounding Down

As for the anecdote, it goes as follows. I was once working with a coauthor and their grad student. Specifically, I was working with the grad student because my coauthor had a meeting that afternoon. The two of us were running some rough-cut regressions, taking a first stab at some data we had just received. As is often the case, we realized we had to cluster our standard errors at the relevant level. So we did that, and the coefficient of interest, which had hitherto been significant, now had a p-value of 0.102 because of the clustering.

That was when the grad student said: “Well, it’s significant, but barely.” I asked the grad student to explain their reasoning, because I was curious to see what they saw that I wasn’t seeing. They then said, “Well, the p-value rounds down to 0.10, right?” It was then that I had to tell the student that p-values are compared against *thresholds*, not rounded toward them, and that if a p-value is greater than 0.10, the estimate is not significant at any of the conventional levels. Likewise, if a p-value is 0.051, the estimate is only significant at the 10 percent level, no matter how much you want it to be significant at the 5 percent level. I thought that was common knowledge, but my interaction with the grad student clearly showed it is not. That is one reason why the late Peter Kennedy’s *Guide to Econometrics* has always been my favorite econometrics text: Kennedy gives you both the intuition behind what you are doing and what is done as a matter of standard practice, and he keeps the technical details for technical appendices at the end of each chapter.
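The threshold logic, as opposed to the grad student’s rounding logic, can be made concrete in a few lines of code. This is only my own illustration (the helper function is hypothetical, not from any econometrics package), sketching the conventional 1, 5, and 10 percent cutoffs as hard thresholds:

```python
def significance_level(p):
    """Return the tightest conventional significance level (0.01, 0.05, 0.10)
    at which an estimate with p-value `p` is significant, or None if it
    clears none of them. The cutoffs are thresholds, not targets to round
    toward: a p-value must actually fall below a level to count."""
    for level in (0.01, 0.05, 0.10):
        if p < level:  # strict comparison: 0.102 does NOT clear 0.10
            return level
    return None

# The p-value from the anecdote rounds to 0.10, but it is not below 0.10:
print(significance_level(0.102))  # None: not significant at any conventional level
print(significance_level(0.051))  # 0.1: significant at the 10% level only
print(significance_level(0.049))  # 0.05: significant at the 5% level
```

Rounding 0.102 down to 0.10 changes the answer; comparing it against the threshold does not.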