Fixing the Peer Review Process by Crowdsourcing It?

Try the following experiment. Take any article accepted for publication at any journal. Now submit it to another journal. What are the odds it will be accepted as is? Zero. There is even a pretty good chance it will be rejected. Our profession seemingly believes that its published articles are in fact not good enough to publish!

That’s from a forthcoming editorial (link opens a .pdf) in the Review of Financial Studies by the Yale School of Management’s Matthew Spiegel.

Spiegel’s point is that editors and reviewers should stop chasing perfection. No paper is or will ever be perfect. For Spiegel, the real peer review process begins after an article has been published:

There is almost no reason to worry if a particular article is “right.” What horrors will befall us if a paper with a mistaken conclusion is published? Not many. The vast majority of articles are quickly forgotten. Who cares if their findings are accurate? The profession will not use the material regardless. What if an article is important — that is, people read and cite it? In that case, academics will dissect its every aspect. Some will examine the article’s data filters; others will check for coding errors; still others will look for missing factors or reverse causality explanations. The list is endless. But, that is the point. The list is endless. Authors, referees, and editors cannot even scratch the surface. Nor do they have to. The fact that our colleagues will stress test any important publication means our profession’s received canon of knowledge has a self-correcting mechanism built in. We have faith that important articles are “right” because their results have been tested over and over again in myriad ways.

A recent example of the vetting process Spiegel describes, and one relevant to development economics, is David Roodman and Jonathan Morduch’s failure to replicate earlier findings by Mark Pitt and Shahidur Khandker.

(HT: Gabriel Power.)