A few weeks ago, I received the galley proofs for my forthcoming paper in the American Journal of Agricultural Economics (AJAE) on price risk. Because the AJAE is just now transitioning from one publisher (Oxford University Press) to another (Wiley), and because I am one of four co-editors of the journal, this was a good occasion to go over some of the journal’s house rules for how papers look in the journal.
One of the things that struck me as weird in the initial set of galley proofs that I received was that, for those tables where not all three of the usual symbols of statistical significance (i.e., *, **, and *** to denote statistical significance at less than the 10, 5, and 1 percent levels, respectively) were used, the journal’s production team had seen fit to only list those symbols that were actually used in the table.
So for example, if a table reported findings that were significant at the 1 and 5 percent levels, but did not report findings that were significant at the 10 percent level, the symbols ** and *** were defined in the table’s notes, but not the symbol *. Similarly, if a table reported a finding that was significant at the 5 percent level, but did not report findings that were significant at the 1 or 10 percent levels, the symbol ** was defined in the table’s notes, but not the symbols *** and *.
Presumably, the journal’s production team did that to save space–however infinitesimally little of it–on each page where a table appeared.
This struck me as counter to good statistical reporting practice: When looking at a table, we are no less interested in the dogs that didn’t bark than we are in the dogs that did. With table notes that define symbols in the usual way (i.e., defining *, **, and *** for coefficients significant at the 10, 5, and 1 percent levels), a coefficient without any stars next to it is understood not to be significant at any of those levels.
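The usual convention can be sketched as a simple mapping from p-values to symbols; here is a minimal illustration in Python (the function name is mine, not anything from the journal’s style guide):

```python
def significance_stars(p_value):
    """Map a p-value to the conventional significance symbols:
    *** for p < 0.01, ** for p < 0.05, * for p < 0.10,
    and no stars otherwise."""
    if p_value < 0.01:
        return "***"
    elif p_value < 0.05:
        return "**"
    elif p_value < 0.10:
        return "*"
    # No stars: not significant at any of the three conventional levels.
    return ""

# Because the mapping covers all four cases, a table note that defines
# all three symbols lets a reader infer that a bare coefficient simply
# failed every threshold.
print(significance_stars(0.003))  # ***
print(significance_stars(0.04))   # **
print(significance_stars(0.2))    # (empty: no stars)
```

The point of spelling it out is that the empty-string case is part of the convention too: it only carries information if the reader knows all three thresholds were in play.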
When a table defines only * and **, a busy reader (or a reader who is not as well-versed in statistics as most of the readers of this blog; say, a policy maker) will have no idea whether any of the coefficients significant at the 5 percent level are significant at the 1 percent level. In practice, the difference between a coefficient that is significant at the 5 percent level or at the 1 percent level can translate into decisions in which a policy maker or manager is respectively “pretty sure” or “almost certain,” and we should strive to be as clear as possible in how we define the results we report.
We have the social norms we have for good reasons. No matter how much some people want to get rid of any talk of statistical significance,* the social norm scholars have settled on when reporting statistical results is to talk of the three usual levels of statistical significance. Defining only those symbols that appear in a table to save a small amount of journal page space can be misleading regarding what the authors chose to report, and it should be opposed whenever possible.
* I encourage those readers to read Ellickson’s Order without Law or his 1989 JLEO article for a good explanation of why we have the social norms that we have–and why the majority of those norms are not going away.