
Fixing the Peer Review Process by Crowdsourcing It? (Continued)

We call the fallout to any article the “comments,” but since they are often filled with solid arguments, smart corrections and new facts, the thing needs a nobler name. Maybe “gloss.” In the Middle Ages, students often wrote notes in the margins of well-regarded manuscripts. These glosses, along with other forms of marginalia, took on a life of their own, becoming their own form of knowledge, as important as, say, midrash is to Jewish scriptures. The best glosses were compiled into, of course, glossaries and later published — serving as some of the very first dictionaries in Europe.

Any article, journalistic or scientific, that sparks a debate typically winds up looking more like a good manuscript 700 years ago than a magazine piece only 10 years ago. The truth is that every decent article now aspires to become the wiki of its own headline.

Sure, there is still the authority that comes of being a scientist publishing a peer-reviewed paper, or a journalist who’s reported a story in depth, but both such publications are going to be crowd-reviewed, crowd-corrected and, in many cases, crowd-improved. (And sometimes, crowd-overturned.) Granted, it does require curating this discussion, since yahoos and obscenity mavens tend to congregate in comment sections.

That’s from a New York Times op-ed in last weekend’s Sunday Review by Jack Hitt, who is also a frequent contributor to This American Life (here is my favorite This American Life story by Jack Hitt).

Hitt’s point should be taken more seriously by academics. In all fairness, however, in some corners of academia, the idea is being taken seriously: the AEJs — the four new journals of the American Economic Association — have a comments section for every published article. (I don’t know why the AEA has not also done so for its flagship journal, the American Economic Review.)

Unfortunately, readers of the AEJs have been slow to embrace the change, as few articles appear to have garnered any comments. Indeed, a quick look at the latest issue of each AEJ turns up no comments at all. Perhaps the problem is that one needs to be a member of the AEA in order to comment.

If those comment threads ever take off, and if other journals start offering similar comment sections, it would be a cheap, quick way of building canonical knowledge within any discipline, as I discussed in my previous post on this topic.

Fixing the Peer Review Process by Crowdsourcing It?

Try the following experiment. Take any article accepted for publication at any journal. Now submit it to another journal. What are the odds it will be accepted as is? Zero. There is even a pretty good chance it will be rejected. Our profession seemingly believes that its published articles are in fact not good enough to publish!

That’s from a forthcoming editorial (link opens a .pdf) in the Review of Financial Studies by the Yale School of Management’s Matthew Spiegel.

Spiegel’s point is that editors and reviewers should stop chasing perfection. No paper is or will ever be perfect. For Spiegel, the real peer review process begins after an article has been published:

There is almost no reason to worry if a particular article is “right.” What horrors will befall us if a paper with a mistaken conclusion is published? Not many. The vast majority of articles are quickly forgotten. Who cares if their findings are accurate? The profession will not use the material regardless. What if an article is important — that is, people read and cite it? In that case, academics will dissect its every aspect. Some will examine the article’s data filters; others will check for coding errors; still others will look for missing factors or reverse causality explanations. The list is endless. But, that is the point. The list is endless. Authors, referees, and editors cannot even scratch the surface. Nor do they have to. The fact that our colleagues will stress test any important publication means our profession’s received canon of knowledge has a self-correcting mechanism built in. We have faith that important articles are “right” because their results have been tested over and over again in myriad ways.

A recent example of the vetting process Spiegel describes, relevant to development economics, is David Roodman and Jonathan Morduch’s failure to replicate earlier findings by Mark Pitt and Shahidur Khandker.

(HT: Gabriel Power.)

Why Not Just Give Money to Charity?

It’s tax season, so my wife and I have recently had to tally how much we gave to charity in 2011. For those of you who do not live in the US, charitable donations are tax-deductible here, which might go a long way toward explaining why Americans give more to charity than the citizens of any other country.

At a little over 0.7 percent of our total household taxable income, we have given a lot less than I expected. Sure, my wife dedicates some of her time every weekend to volunteering at our local animal shelter, but ideally, I would like to see our household’s charitable giving increase to at least 2.5 percent of our income next year.

Oddly enough, many people are reluctant to give any money to charity. Development blogger extraordinaire Alanna Shaikh (if you are interested in getting a job in development, you do subscribe to her International Development Career List, right?) explains why — and why you should give money to charity: