

Yes to Land Rights, but Land Titles Are No Silver Bullet

Some economists argue that giving people titles to their land can foster a feeling of security and boost production. … The greatest proponent of the argument is Hernando de Soto, a development economist who has managed to win praise from the likes of Bill Clinton and the libertarian Cato Institute.

There is plenty of evidence that land rights are connected to productivity, but new research out of Madagascar shows that this is not always the case.

Duke University researcher Marc F. Bellemare tested whether the land rights component of a $100 million Millennium Challenge Corporation (MCC) compact with the government of Madagascar improved agricultural productivity. He found that the provision of formal land rights, meaning land titles, had no measurable impact on productivity when comparing farmers who did and did not benefit from the MCC compact.

Holding a land title is not sufficient if structures are not in place to enforce land ownership and dole it out.

From a very nice article by Tom Murphy on Humanosphere, which discusses the policy implications of my forthcoming Land Economics article on land rights in Madagascar.

Do Land Titles Increase Agricultural Productivity?

Not everywhere:

This paper studies the relationship between land rights and agricultural productivity. Whereas previous studies used proxies for soil quality and instrumental variables to control for the endogeneity of land titles, the data used here include precise soil quality measurements, which in principle allow controlling for the unobserved heterogeneity between plots. Empirical results suggest that formal land rights (i.e., land titles) have no impact on productivity, but that informal land rights (i.e., landowners’ subjective perceptions of what they can and cannot do with their plots) have heterogeneous impacts on productivity.

That’s the abstract of my paper titled “The Productivity Impacts of Formal and Informal Land Rights: Evidence from Madagascar,” which has just been accepted for publication in Land Economics.

The paper is notable for a few things. First, it shows that land titles have no impact on agricultural productivity in Madagascar, a country where the US government had planned on spending $110 million on various initiatives aimed at “assisting the rural population to transition from subsistence agriculture to a market economy,” including via land titling.
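To make the empirical strategy in the abstract a bit more concrete, here is a minimal sketch of the kind of plot-level regression it describes: productivity regressed on formal and informal land rights while controlling for measured soil quality rather than proxies or instruments. The variable names, the simulated data, and the specification below are all hypothetical illustrations, not the paper’s actual data or estimating equation.

```python
# Illustrative sketch only: a plot-level regression in the spirit of the abstract,
# with made-up variable names and simulated data (NOT the paper's specification).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500  # hypothetical number of plots

df = pd.DataFrame({
    "log_yield": rng.normal(7.0, 0.5, n),           # plot productivity (log output per hectare)
    "formal_title": rng.integers(0, 2, n),          # 1 if the plot has a formal land title
    "perceived_sale_right": rng.integers(0, 2, n),  # informal right: owner believes the plot can be sold
    "soil_carbon": rng.normal(1.5, 0.3, n),         # measured soil quality, e.g., organic carbon
    "soil_ph": rng.normal(6.0, 0.5, n),
    "plot_size_ha": rng.lognormal(0.0, 0.5, n),
})

# Regress productivity on formal and informal land rights, controlling for
# measured soil characteristics instead of soil-quality proxies or instruments.
model = smf.ols(
    "log_yield ~ formal_title + perceived_sale_right"
    " + soil_carbon + soil_ph + plot_size_ha",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
```

The point of this kind of specification is that, if soil quality is what drives both titling and productivity, observing it directly lets you control for it rather than having to instrument for land titles.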

Replication, Publication Bias, and Negative Findings

I came across a fascinating read on some of the important problems that plague the scientific process in the social sciences and elsewhere. From an article by Ed Yong in the May 2012 issue of Nature:

Positive results in psychology can behave like rumours: easy to release but hard to dispel. They dominate most journals, which strive to present new, exciting research. Meanwhile, attempts to replicate those studies, especially when the findings are negative, go unpublished, languishing in personal file drawers or circulating in conversations around the water cooler. “There are some experiments that everyone knows don’t replicate, but this knowledge doesn’t get into the literature,” says Wagenmakers. The publication barrier can be chilling, he adds. “I’ve seen students spending their entire PhD period trying to replicate a phenomenon, failing, and quitting academia because they had nothing to show for their time.” (…)

One reason for the excess in positive results for psychology is an emphasis on “slightly freak-show-ish” results, says Chris Chambers, an experimental psychologist at Cardiff University, UK. “High-impact journals often regard psychology as a sort of parlour-trick area,” he says. Results need to be exciting, eye-catching, even implausible. Simmons says that the blame lies partly in the review process. “When we review papers, we’re often making authors prove that their findings are novel or interesting,” he says. “We’re not often making them prove that their findings are true.”

I have briefly discussed the lack of replication in economics here, but in short, the issue is that once a finding is published, there are practically no incentives for people to replicate it.

There are two reasons for this. The first is that journals tend to want to publish only novel results, so even if you manage to confirm someone else’s findings, there will be few takers for your study unless you do something significantly different… in which case you’re no longer doing replication.

The second is the tendency to publish only studies in which the authors find support for their hypothesis. This is known as “publication bias.”

For example, suppose I hypothesize that individuals’ consumption increases as their income increases, and suppose I find support for that hypothesis using data on US consumers. This result eventually gets published in a scientific journal. Suppose now that you try to replicate my finding using Canadian data and fail. Few journals would actually be interested in such a finding. That’s because failing to reject the null hypothesis in a statistical test is not surprising (after all, you’ve staked 90, 95, or 99 percent of the probability mass on the null hypothesis that consumption is not associated with income), but also because, as Yong’s article highlights, that would not exactly be an “exciting, eye-catching” result.
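A quick simulation helps illustrate how this selection works. Below is a purely illustrative sketch, assuming (contrary to the consumption example above) that there is no true relationship at all: many hypothetical “studies” test for one at the 5 percent level, and only the significant ones get “published.” The numbers and variable names are invented for the illustration.

```python
# Illustrative sketch only: publication bias when the true effect is zero.
# Each "study" regresses consumption on income; only significant results "publish".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_obs = 1000, 200
published_slopes = []

for _ in range(n_studies):
    income = rng.normal(50_000, 10_000, n_obs)
    consumption = rng.normal(30_000, 5_000, n_obs)  # no true link to income
    res = stats.linregress(income, consumption)
    if res.pvalue < 0.05:  # the "exciting" outcome: reject the null
        published_slopes.append(res.slope)

print(f"{len(published_slopes)} of {n_studies} studies rejected the null at the 5% level")
if published_slopes:
    print("Average magnitude of the 'published' slopes:",
          np.mean(np.abs(published_slopes)))
```

Roughly 5 percent of these null studies clear the significance bar by chance, and it is only those spurious estimates that would make it into print, while the many null results stay in the file drawer.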

I am currently dealing with such a “negative finding” in one of my papers, in which I find that land titles in Madagascar do not have the positive impact on productivity posited by the theoretical literature, even though donors have invested hundreds of millions of dollars there in various land titling policies. Perhaps unsurprisingly, the paper has proven to be a very tough sell.

(HT: David McKenzie.)