

Does Participation in Agricultural Value Chains Make Smallholders Better Off?

Yes, it does.

At least, that is my answer to the question in a new article of mine titled “As You Sow, So Shall You Reap: The Welfare Impacts of Contract Farming,” which is forthcoming in World Development.

More specifically, I try to estimate the causal impacts of participation in contract farming (the economic institution in which a processing firm contracts out the production of agricultural commodities to grower households, who constitute the first link in an agricultural value chain) on the welfare of those smallholders.

The major difficulty with studying such problems is that households are not randomly assigned to the treatment (i.e., participants in contract farming) and control (i.e., nonparticipants in contract farming) groups.

The smallholders who choose to participate in agricultural value chains do so following systematic patterns. The problem is that the researcher has no idea what those patterns are, as they often involve variables that are unobserved.

For example, it could be that more entrepreneurial smallholders are less likely to participate in agricultural value chains because they have better options. Or it could be that smallholders who are risk-averse are more likely to participate in agricultural value chains because contract farming partially insures them against income risk. But if it is difficult to measure risk aversion, it is even more difficult to measure entrepreneurial ability.
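To see what this kind of selection on unobservables does to a naive comparison, here is a minimal simulation in Python (the numbers and the data-generating process are invented for illustration, not taken from the paper): an unobserved trait raises welfare directly and also makes participation less likely, exactly as in the entrepreneurship example above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved entrepreneurial ability: raises welfare directly...
ability = rng.normal(size=n)

# ...and, as in the example above, makes participation LESS likely,
# because more entrepreneurial smallholders have better outside options.
participates = rng.normal(size=n) - ability > 0

# True causal effect of participation on welfare: +1.
welfare = 1.0 * participates + 2.0 * ability + rng.normal(size=n)

# Naive difference in means between participants and nonparticipants:
naive = welfare[participates].mean() - welfare[~participates].mean()
print(f"true effect: 1.00, naive estimate: {naive:.2f}")
# The naive estimate is biased downward (here it even has the wrong
# sign) because participants have systematically lower unobserved
# ability than nonparticipants.
```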

Evaluating the Impact of Policies Using Regression Discontinuity Design, Part 2

I had a long post yesterday on regression discontinuity design (RDD), a statistical apparatus that makes it possible to identify causal relationships even in the absence of randomization.

I split my discussion of RDD into two posts so as to respect my self-imposed rule #3 (“anything longer than 500 words, you split into two posts,” which constitutes an example of RDD in itself). To make a long story short, the assumption made by RDD is that units of observation (e.g., children) immediately above and below some exogenously imposed threshold (e.g., the passing mark on an entrance exam for an elite school) are similar, so that comparing units immediately above and below that threshold allows estimating a causal effect (e.g., the causal effect of going to an elite school).
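To make this concrete, here is a minimal sketch of a sharp RDD in Python (simulated data; the cutoff, effect size, and bandwidth are all invented for illustration): a local linear regression is fit on each side of the threshold, and the jump between the two fitted lines at the cutoff estimates the causal effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Running variable: entrance exam score, with the passing mark at 60.
score = rng.uniform(0, 100, size=n)
elite = score >= 60  # sharp design: treatment switches exactly at the cutoff

# Outcome depends smoothly on the score, plus a true jump of 5 points
# caused by attending the elite school.
outcome = 0.3 * score + 5.0 * elite + rng.normal(scale=3.0, size=n)

# Local linear regression on each side of the cutoff, within a
# bandwidth, extrapolated to the cutoff itself:
cutoff, h = 60.0, 5.0
left = (score >= cutoff - h) & (score < cutoff)
right = (score >= cutoff) & (score < cutoff + h)
fit_left = np.polyfit(score[left], outcome[left], 1)
fit_right = np.polyfit(score[right], outcome[right], 1)
jump = np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)
print(f"estimated jump at the cutoff: {jump:.2f} (true effect: 5.00)")
# Units just above and below the cutoff are otherwise similar, so the
# jump in the fitted lines at the cutoff estimates the causal effect.
```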

RDD is nice to have when eligibility for some treatment (e.g., going to an elite school) is determined by a single threshold. Often, however, there will be multiple thresholds, which are aggregated into a single index without any clear idea as to what weight is given to each variable. So what are we to do in those cases?
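For concreteness, here is a hypothetical sketch in Python of the kind of aggregation described above: several eligibility criteria are collapsed into one index with made-up weights, and a single cutoff on that index determines treatment. The variable names and weights are my own assumptions, not taken from any actual program.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Hypothetical targeting program: eligibility depends on several
# household characteristics rather than on a single variable.
income = rng.normal(50, 10, size=n)
household_size = rng.integers(1, 9, size=n)
land_owned = rng.exponential(2.0, size=n)

# The program administrator collapses the criteria into one index.
# These weights are made up; the researcher often does not know them.
index = -0.5 * income + 2.0 * household_size - 1.0 * land_owned

# A single cutoff on the composite index determines eligibility,
# say for the neediest 30 percent of households:
cutoff = np.quantile(index, 0.7)
eligible = index >= cutoff
print(f"cutoff: {cutoff:.2f}, share eligible: {eligible.mean():.2f}")
# The composite index then plays the role of the running variable.
```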

Evaluating the Impact of Policies Using Regression Discontinuity Design, Part 1

Do students in smaller classes perform better than students in larger classes?

The answer might seem obvious. After all, students in smaller classes receive more attention from teachers, and so they should perform better.

We cannot know for sure, however, without looking at actual data on class size and student performance. In order to do so, we could collect data on student performance from various schools whose class sizes vary and look at whether students in smaller classes perform better.

But that wouldn’t be enough to determine whether smaller classes actually cause students to perform better. Correlation is not causation, and it could be the case that high-performing students are assigned to smaller classes composed of similar students. Thus, finding a correlation between class size and student performance would not be an indication that smaller classes cause students to perform better; it might only reflect school administrators’ tendency to group high-performing students together in smaller classes.

So how are we to know whether smaller classes actually cause students to perform better? One way could be to create classes of varying sizes (say, classes of 15, 30, 45, and 60 students) and randomly assign students to a given class size at the beginning of the year. Then, we could collect data on student performance on a standardized year-end exam and test whether average student performance is better in smaller than in bigger classes. Unfortunately, such a nice, clean experiment isn’t always feasible.
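Here is a minimal sketch in Python of what such an experiment would look like in simulation (the class sizes come from the paragraph above; the effect size and score distribution are invented): because assignment is random, a simple comparison of group means recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Randomly assign each student to one of the four class sizes.
sizes = np.array([15, 30, 45, 60])
assigned = rng.choice(sizes, size=n)

# Hypothetical data-generating process: baseline ability plus a small
# true penalty of 0.1 points per additional classmate.
ability = rng.normal(70, 10, size=n)
exam = ability - 0.1 * assigned + rng.normal(scale=5.0, size=n)

# Because assignment is random, ability is balanced across class sizes,
# so simple group means estimate the causal effect of class size.
for s in sizes:
    print(f"class size {s:2d}: mean exam score {exam[assigned == s].mean():.1f}")
```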