Over the last few years, index insurance has been receiving an increasing amount of attention from researchers and policy makers.
Whereas regular insurance pays out when a verifiable loss is incurred (e.g., flood insurance pays out when there has been a flood), whether an index insurance policy pays out depends on whether some index crosses a certain threshold. For example, a rainfall index insurance policy for agricultural producers in a given region would pay out when growing conditions in that region are too dry, i.e., when rainfall falls below a specific, predetermined threshold.
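Such a payout rule is purely mechanical. As a minimal sketch (the threshold, per-millimeter rate, and cap below are all hypothetical contract parameters, not taken from any actual policy):

```python
def index_payout(rainfall_mm: float,
                 threshold_mm: float = 300.0,   # hypothetical trigger
                 payout_per_mm: float = 2.0,    # hypothetical indemnity rate
                 max_payout: float = 500.0) -> float:
    """Payout depends only on the observed index, not on verified losses."""
    shortfall = max(0.0, threshold_mm - rainfall_mm)
    return min(max_payout, shortfall * payout_per_mm)

print(index_payout(350.0))  # rainfall above the threshold: no payout
print(index_payout(250.0))  # 50 mm shortfall: pays 50 * 2.0 = 100.0
```

Note that actual losses never enter the function, which is precisely why the insured party cannot trigger a payout by neglecting their field.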
The beauty of index insurance is that it greatly reduces the scope for moral hazard. Indeed, if I insure your crop, you might well decide to neglect your field, do nothing for the entire season, and wait for me to give you a payout. Not so with index insurance, since the index (e.g., rainfall, temperature, etc.) is typically very difficult to manipulate.
I was not planning on blogging about this, but an email last week from my colleague Nicholas Magnan made me realize I should probably share this with other teachers of economics. Nicholas told me he wanted to run the Trading Game (a simple in-class experiment I run with the students in my principles of microeconomics class every year to show them that trade leaves no one worse off) in his own classes, and he asked whether I had written anything about it.
Protocol
The Trading Game is pretty simple. Before the start of every semester in which I teach principles of microeconomics, I look at the number of students enrolled in my class, and I head out to the nearest dollar store to buy an equal number of trinkets.
As luck would have it, WikiMedia Commons has a picture of the very place in Durham where I buy all of my Trading Game trinkets:
The trinkets I buy are all in the $1-to-$3 range, and they consist largely of toys. This year’s trinket harvest yielded a Toys (as in the movie) puzzle, glow sticks, Donald Duck stickers, fake tattoos, miniature plastic animals, toy dinosaurs, etc. For a group of 50 students, I usually spend no more than $100 of the allocation I receive for my course.
Then, when I want to run the Trading Game, in the wake of teaching students how trade can make everyone better off in the context of chapter 1 of Mankiw’s Principles of Microeconomics, I go around allocating trinkets to students at random.
I then ask students to assign a value to the trinket they have just received ranging from 0 to 10, with higher values meaning cooler trinkets.
We then go around the room recording those values. Because students often bring their laptops to lecture, it is easy to find a volunteer to record those values, but you can have a teaching assistant do it. Once all values are recorded, total welfare (i.e., the sum total of the values students assign to their trinkets) is announced.
I then tell students that they have five minutes to trade voluntarily between themselves, insisting on the fact that trades must be voluntary (i.e., no stealing) and cannot involve dynamic aspects, or credit (i.e., no “I’ll give you my cool dinosaur if you give me your awful trinket and you buy drinks on Friday night.”)
Once students are done trading, we once again go around the room recording the values they assign to their trinkets. Once all values are recorded, total welfare is announced once again.
And that’s usually where the magic happens. When I ran the Trading Game last week, my class’ “aggregate welfare” went from 128 to about 180, if I recall correctly, and you could just see that it had become obvious to students that (in this context of well enforced property rights) trade not only left no one worse off, but it increased aggregate welfare.
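For readers who want to see the mechanics in miniature, here is a sketch of why that second welfare number cannot come in below the first (the class size and valuations below are made up): because trades are voluntary, a swap happens only when both students prefer what the other holds, so every executed trade raises aggregate welfare.

```python
import random

random.seed(42)
n = 10  # hypothetical class size
# values[i][t]: how much student i values trinket t, on the 0-to-10 scale
values = [[random.randint(0, 10) for _ in range(n)] for _ in range(n)]
holding = list(range(n))  # student i starts with trinket i, at random

def welfare():
    """Aggregate welfare: sum of each student's value for the trinket held."""
    return sum(values[i][holding[i]] for i in range(n))

before = welfare()

# Keep swapping as long as some pair of students both gain from trading.
# Each executed swap strictly raises both traders' values, so the loop
# terminates and welfare weakly increases.
improved = True
while improved:
    improved = False
    for i in range(n):
        for j in range(i + 1, n):
            ti, tj = holding[i], holding[j]
            if values[i][tj] > values[i][ti] and values[j][ti] > values[j][tj]:
                holding[i], holding[j] = tj, ti
                improved = True

after = welfare()
print(before, after)  # after >= before, by construction
```

The key design choice mirrors the classroom rule: no student is ever forced into a trade, which is exactly what guarantees that the post-trade total is at least the pre-trade total.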
If I’d wanted to do things more convincingly, I would’ve asked the student who recorded the values in a spreadsheet to test whether the two sets of values were statistically significantly different from one another.
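That test could look like the following sketch: a paired t-test on each student’s pre- and post-trade valuation, using only Python’s standard library (the before-and-after values here are hypothetical, not my class’s actual numbers).

```python
import math
import statistics

# Hypothetical valuations for ten students, before and after trading.
before = [3, 5, 2, 7, 4, 6, 1, 5, 3, 4]
after  = [6, 5, 4, 8, 7, 6, 3, 7, 5, 6]

# Paired t-test: work with within-student differences.
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)            # sample standard deviation
t_stat = mean_d / (sd_d / math.sqrt(n))   # compare to t critical value, df = n - 1

print(round(t_stat, 2))
```

With a t-statistic well above the usual critical values for 9 degrees of freedom, these made-up data would let the class reject the null of no change in valuations.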
I cannot take credit for the Trading Game, as I first learned about it in 1999, when I played it at a colloquium for student leaders organized by a Canadian free-market think-tank (yes, those actually exist).
Last week I wrote two posts about the local average treatment effect (LATE). Click here for part 1, and here for part 2, in which I respectively discuss the difference between the ATE and the LATE, and the difficulty of comparing results across studies if different studies rely on different instrumental variables (IV).
This brings me to the topic of this post. After I posted part 2 last week, a reader — an economist who has been out of school for some time — emailed me with the following:
I can’t recall learning about this while in grad school. Surely it was mentioned and it’s just receded into a dark corner of my memory? It seems like a pretty important concept to consider, although I guess it’s a bigger concern for experimental economics?
The emphasis is mine. An equally emphatic answer would be: “No, it’s actually a huge problem with nonexperimental data.”
Wages, Military Service, and the Vietnam War
To see this, consider a classic IV example: Angrist’s (1990) study of the impact of military service on earnings. Because earnings and veteran status are jointly determined (people may select into or out of military service based in part on their expected civilian earnings), Angrist used a respondent’s Vietnam draft lottery number as an IV for the respondent’s veteran status.
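To fix ideas, here is a sketch of the IV logic on simulated data (every number below is made up): a binary instrument z, think of a draft-eligible lottery number, shifts an endogenous treatment x, think of serving in the military, which in turn affects earnings y. A naive comparison of means is biased because an unobserved factor u drives both x and y, while the simple Wald form of the IV estimator recovers the true effect.

```python
import random
import statistics

random.seed(0)
true_effect = 2.0
n = 100_000
z = [random.random() < 0.5 for _ in range(n)]   # instrument: lottery draw
u = [random.gauss(0, 1) for _ in range(n)]      # unobserved confounder
# Treatment take-up rises with z but also depends on u (self-selection).
x = [1.0 if 0.6 * zi + 0.5 * ui + random.gauss(0, 1) > 0.8 else 0.0
     for zi, ui in zip(z, u)]
y = [true_effect * xi + 1.5 * ui + random.gauss(0, 1)
     for xi, ui in zip(x, u)]

def mean_where(vals, flags):
    """Mean of vals over observations where the flag is True."""
    return statistics.fmean(v for v, f in zip(vals, flags) if f)

not_z = [not zi for zi in z]
treated = [xi == 1.0 for xi in x]
control = [xi == 0.0 for xi in x]

# Naive difference in means: contaminated by selection on u.
naive = mean_where(y, treated) - mean_where(y, control)
# Wald estimator: reduced form divided by the first stage.
iv = ((mean_where(y, z) - mean_where(y, not_z)) /
      (mean_where(x, z) - mean_where(x, not_z)))
print(round(naive, 2), round(iv, 2))  # naive overstates; IV is near 2.0
```

Because the treatment effect is constant in this simulation, the IV estimate coincides with the ATE; with heterogeneous effects it would instead recover the LATE for compliers, which is precisely the point of the two earlier posts.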