I had just finished my master’s in economics at the Université de Montréal in December 2000 when the Québec Ministry of International Relations announced that it was funding an internship at the International Fund for Agricultural Development (IFAD), one of the three Rome-based development agencies of the United Nations.
Knowing I was going to start a PhD in agricultural and applied economics the following fall, I applied and eventually got the internship. But one thing that struck me from the beginning (that is, from my initial interview with officials at the Ministry of International Relations) was the emphasis on “sustainable” development.
Wikipedia defines sustainable development as
a pattern of economic growth in which resource use aims to meet human needs while preserving the environment so that these needs can be met not only in the present, but also for generations to come.
The adoption of sustainable development policies is a laudable goal, but given that we often have a hard time knowing whether specific development interventions actually “work,” I suspect it’s even more difficult to know whether specific development interventions (i) actually work and (ii) preserve the environment.
That Thing Called Causality
Any social scientist worth his or her salt knows how difficult it is to make causal statements, i.e., to test whether a variable D (e.g., some development intervention) causes an increase in another variable Y (e.g., welfare).
The difficulty usually arises because of the presence of confounding variables. Some of those confounding variables can be measured, but it is almost always the case that some confounders go unmeasured, which compromises the identification of causal relationships.
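To make the problem concrete, here is a minimal sketch in the standard potential-outcomes notation (my addition, not something from the original discussion): write Y(1) and Y(0) for a unit’s outcome with and without the intervention D. The naive comparison of treated and untreated units then decomposes as

    E[Y | D=1] - E[Y | D=0]
      = \underbrace{E[Y(1) - Y(0) | D=1]}_{\text{causal effect on the treated}}
      + \underbrace{E[Y(0) | D=1] - E[Y(0) | D=0]}_{\text{selection bias}}

The selection-bias term is exactly what unmeasured confounders feed, and it cannot be computed from the data, since Y(0) is never observed for treated units.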
To solve the identification problem, social scientists rely on experimental or quasi-experimental research designs, i.e., setups in which D is assigned randomly or in which some plausibly exogenous source of variation makes D as good as random.
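To illustrate the difference this makes, here is a toy simulation (again my addition; the setup and the numbers are made up): an unobserved confounder U raises both the probability of receiving D and the outcome Y, so the naive comparison badly overstates a true effect of 1.0, while randomizing D recovers it.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Unobserved confounder U shifts both take-up of D and the outcome Y.
    u = rng.normal(size=n)

    # Observational world: units with high U are more likely to get D.
    d_obs = (u + rng.normal(size=n) > 0).astype(float)
    y_obs = 1.0 * d_obs + 2.0 * u + rng.normal(size=n)  # true effect of D is 1.0
    naive = y_obs[d_obs == 1].mean() - y_obs[d_obs == 0].mean()

    # Experimental world: D is assigned by coin flip, independent of U.
    d_rct = rng.integers(0, 2, size=n).astype(float)
    y_rct = 1.0 * d_rct + 2.0 * u + rng.normal(size=n)
    rct = y_rct[d_rct == 1].mean() - y_rct[d_rct == 0].mean()

    print(f"naive (confounded) estimate: {naive:.2f}")  # about 3.3, not 1.0
    print(f"randomized estimate:         {rct:.2f}")    # close to 1.0

The same logic carries over to quasi-experiments, where an instrument or a discontinuity stands in for the coin flip.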
Causality Squared
But even with an experimental or quasi-experimental design, it can be difficult to identify whether an increase in D at time T causes a change in Y at time T+1. So how do we know whether (i) the same change will be maintained at, say, T+50 (i.e., whether the intervention still works) and (ii) both the increase in D and the change in Y have preserved the environment (i.e., whether it is sustainable)?
In other words, it is difficult enough to know whether something works in a cross-section; how are we to know whether it will preserve the environment in the future? How distant a future should we be considering, exactly? What assumptions should we make about a fundamentally unpredictable future? And how are we to rule out the kind of negative feedback that would undermine the environment in other ways?
There are likely many people working in development nowadays who “just know” that the interventions they propose or are working on are sustainable, much like there were many people working in development 10 or 15 years ago who “just knew” that the interventions they proposed or were working on “worked.”
In other words, the identification problem is about a hundred times worse when one starts considering the future, and my hunch is that “sustainable development” is just a buzzword. There is clearly a case for more T (i.e., longer time horizons) in experiments.
Update: Of course, this post says nothing about learning about sustainability from the past. The point of this post was ex ante — not ex post — sustainability.
Medical Analogies in the Social Sciences
Andrew Gelman writes:
Social scientists are often tempted to illustrate their ideas with examples from medical research. When it comes to medicine, though, we are, with rare exceptions, at best ignorant laypersons (in my case, not even reaching that level), and it is my impression that by reaching for medical analogies we are implicitly trying to borrow some of the scientific and cultural authority of that field for our own purposes. Evidence-based medicine is the subject of a large literature of its own.
Gelman’s post is a contender for the Post with the Longest Title 2012 award; its title is indeed “Social scientists who use medical analogies to explain causal inference are, I think, implicitly trying to borrow some of the scientific and cultural authority of that field for our own purposes.”
I wonder if he got his inspiration from the Red Sparowes, whose song titles are known for being long (e.g., “the great leap forward poured down upon us one day like a mighty storm, suddenly and furiously blinding our senses,” from their album Every Red Heart Shines Toward the Red Sun).