
Three Resources on Refereeing

Last updated on March 30, 2014

Two of my less visible roles as an academic are serving as associate editor at the American Journal of Agricultural Economics and at Food Policy, and refereeing for a number of journals in any given year.

I made a list of my 20 rules for refereeing a few years ago; I still stand by most if not all of those rules (as an associate editor, I especially emphasize the first three rules!), and I am planning on writing a post discussing what I have learned from handling manuscripts as an associate editor sometime in the near future. For now, however, I wanted to draw the reader’s attention to three refereeing resources, which my friend and grad-school colleague Gabriel Power passed along a few weeks ago:

  1. “Referee Recommendations,” a paper by Ivo Welch. Abstract: “This paper analyzes referee recommendations at the SFS Cavalcade, where a known algorithm matched referees to submissions, and at eight prominent economics and finance journals (ECMTA, JEEA, JET, QJE, IER, RAND, JF, RFS). The behavior of referees was similar in all venues. The referee-specific component was about twice as important as the common component. Referees differed both in their scales (some referees were intrinsically more generous than others) and in their opinions of what a good paper was (they often disagreed about the relative ordering of papers). My paper quantifies these effects.”
  2. John G. Lynch’s presidential address to the Association for Consumer Research, in which he discusses refereeing. Money quote: “My single biggest motivation in wanting to talk about reviewing is my perception that so much of it is badly done. I know that it is heresy to say so. Editors always praise the insightful contributions of their editorial boards and their ad hoc reviewers. Authors always talk graciously about how much reviews, even critical ones, helped their papers. And I’ve been the beneficiary of some really insightful reviews over the years. But from working in this field for 18 years and from seeing reviews of a couple hundred JCR and JCP manuscripts, my observation is that the average reviewer is mediocre at best (present company excepted, of course).”
  3. A paper by J. Scott Armstrong titled “Peer Review for Journals: Evidence on Quality Control, Fairness, and Innovation.” Abstract: “I reviewed the published empirical evidence concerning journal peer review, which consisted of 68 papers, all but three published since 1975. Peer review improves quality, but its use to screen papers has met with limited success. Current procedures to assure quality and fairness seem to discourage scientific advancement, especially important innovations, because findings that conflict with current beliefs are often judged to have defects. Editors can use procedures to encourage the publication of papers with innovative findings such as invited papers, early-acceptance procedures, author nominations of reviewers, results-blind reviews, structured rating sheets, open peer review, and, in particular, electronic publication. Some journals are currently using these procedures. The basic principle behind the proposals is to change the decision from whether to publish a paper to how to publish it.”