17 Jan 18

Between the Introduction and the Conclusion: The “Middle Bits” Formula for Applied Papers

Last Friday, Chris Goodman tweeted about the “conclusion formula” I wrote a few years ago on how to write the conclusion of a standard paper in economics. Rob Greer then responded by asking whether there was such a formula for the rest of the paper.

Rob was most likely joking, but there actually is such a thing as a formula for the so-called “middle bits”–at least for the kind of paper I usually write.

Let’s look at the outline of a typical paper. When I write a new paper, the first thing I do in LaTeX is create the following sections:

  1. Introduction
  2. Theoretical Framework
  3. Empirical Framework
  4. Data and Descriptive Statistics
  5. Results and Discussion
  6. Conclusion



15 Jan 18

‘Metrics Monday: Useless Hausman Tests

Per Wikipedia, recall that the Durbin-Wu-Hausman test (hereafter the Hausman test)

evaluates the consistency of an estimator when compared to an alternative, less efficient estimator which is already known to be consistent.
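In symbols, and in its standard contrast form, the statistic compares a consistent-but-less-efficient estimator \hat{\beta}_{c} with an estimator \hat{\beta}_{e} that is efficient under the null but possibly inconsistent otherwise:

H = (\hat{\beta}_{c} - \hat{\beta}_{e})'\left[\widehat{\mathrm{Var}}(\hat{\beta}_{c}) - \widehat{\mathrm{Var}}(\hat{\beta}_{e})\right]^{-1}(\hat{\beta}_{c} - \hat{\beta}_{e}),

which is distributed chi-squared under the null, with degrees of freedom equal to the number of coefficients being compared.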

One common way in which the Hausman test is used is to compare OLS with 2SLS–that is, to test the null hypothesis of exogeneity. The test consists of estimating an OLS specification, estimating a 2SLS specification of the same equation, and then testing whether the two parameter vectors are statistically the same. If you fail to reject the null of exogeneity, OLS is to be preferred; if you reject the null, then 2SLS is to be preferred.
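To make the mechanics concrete, here is a minimal sketch in Python of the regression-based (control-function) version of the test, using simulated data and made-up variable names rather than anything from a real application; the logic is that a statistically significant coefficient on the first-stage residuals in the augmented outcome equation rejects the null of exogeneity:

  import numpy as np
  import statsmodels.api as sm

  # Simulated data: x is endogenous (correlated with the structural error u), z is the instrument
  rng = np.random.default_rng(42)
  n = 1000
  z = rng.normal(size=n)
  u = rng.normal(size=n)
  x = 0.8 * z + 0.5 * u + rng.normal(size=n)
  y = 1.0 + 2.0 * x + u

  # Step 1: first stage -- regress x on the instrument and keep the residuals
  first_stage = sm.OLS(x, sm.add_constant(z)).fit()
  v_hat = first_stage.resid

  # Step 2: augmented outcome equation -- add the first-stage residuals as a regressor;
  # a significant coefficient on v_hat is evidence against the null that x is exogenous
  X_aug = sm.add_constant(np.column_stack([x, v_hat]))
  augmented = sm.OLS(y, X_aug).fit()
  print("t-statistic on v_hat:", augmented.tvalues[2])
  print("p-value on v_hat:", augmented.pvalues[2])

With real data, you would swap in your own outcome, suspected-endogenous regressor, and instrument(s), and you might want heteroskedasticity-robust standard errors in the augmented regression (e.g., fit(cov_type="HC1")), which is one advantage of the regression-based version over the textbook contrast form.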


08 Jan 18

‘Metrics Monday: The Dogit Model

Even if you know only a little about discrete-choice models, you know that when comparing multinomial alternatives, or unordered categories–for instance, the choice to drive, take the bus, bike, or walk to work, or the decision to pick a major in arts and sciences, business, engineering, etc.–the go-to model is the multinomial logit (MNL), a version of the logit that allows comparing more than two alternatives without there being any order between those alternatives. And if you’ve studied the MNL, you know that its main drawback is that it assumes the independence of irrelevant alternatives (IIA) assumption holds.

In the context of the mode of transportation you choose to get to work, the classic example of IIA is this: Suppose you face a choice between driving or taking a red bus to work, each with equal likelihood, i.e., 0.50-0.50. Introducing a blue bus as a third alternative should not affect your likelihood of driving to work–your choice to drive should be independent of the irrelevant alternatives blue bus and red bus. If the introduction of the blue bus changes the likelihood you’ll drive in any way, then IIA does not hold.
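In case a bit of notation helps (writing V_{i} for the systematic utility of alternative i): under the MNL, the choice probabilities are p_{i} = e^{V_{i}}/\sum_{j}e^{V_{j}}, so for any two alternatives i and k,

p_{i}/p_{k} = e^{V_{i}}/e^{V_{k}},

a ratio that does not depend on which other alternatives are in the choice set; that invariance of the relative odds is precisely what the MNL imposes by assumption.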

There are many contexts where the IIA cannot be argued to hold. Perhaps the simplest case is Condorcet’s paradox: A group of voters might have a clear preference between candidates A and B, but introducing a third candidate may well make their preferences cyclical.

Some well-known alternatives to the MNL that relax the IIA assumption are the generalized extreme value (GEV) model and the multinomial probit (MNP) model, but both the GEV and MNP models often make undesirable assumptions or are difficult to estimate–the MNP, for example, involves integrating over an N-variate normal distribution, which gets computationally intensive beyond the bivariate or trivariate cases.

One option which I suspect only one or two of you have ever heard of is the dogit model (pronounced “dodge it”; I’ll get to why in a minute), which offers a nice alternative to the MNL in that it allows relaxing the IIA assumption either fully or only partially.

This is not a post about Fox Mulder’s replacement.

The dogit model was introduced by Gaudry and Dagenais in a 1979 article in Transportation Research Part B (a lot of the early microeconometric models were developed by transportation economists, given the categorical nature of many transportation choices).* They called it “dogit” because, according to the first footnote of the paper (the emphasis is mine):

The model avoids or dodges the researcher’s dilemma of choosing a priori between a format which commits to IIA restrictions or one which excludes them–whence its name.

The cool thing is that the dogit model allows the data to speak for itself when it comes to the IIA.

Another cool thing is that it readily allows for a certain amount of captivity to a given choice category. For instance, consumers must often spend a certain amount on certain expenditure categories (e.g., food) irrespective of their price before they can start spending on other categories (e.g., books), which makes them “captive” to some expenditure categories. Lastly, it also allows the possibility of estimating an “income effect” in addition to the substitution effect one can estimate with the MNL, but that is less clear to me from reading Gaudry and Dagenais’s article.

If you are interested in seeing what the distribution looks like for the dogit, here it is:
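With V_{i} denoting the systematic utility of alternative i and J the number of alternatives (the notation here is generic rather than Gaudry and Dagenais’s own), the choice probabilities take the form

p_{i} = \frac{e^{V_{i}} + \theta_{i}\sum_{j=1}^{J}e^{V_{j}}}{\left(1 + \sum_{j=1}^{J}\theta_{j}\right)\sum_{j=1}^{J}e^{V_{j}}}, \qquad \theta_{i} \geq 0 \text{ for all } i.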

Obviously, all the p_{i}s have to be between zero and one and must all sum up to one. The \theta parameters are what’s new here–if they all equal zero, the dogit reverts to an MNL.
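To see those properties at work, here is a minimal sketch in Python (the function and the utility values are made up for illustration, not taken from Gaudry and Dagenais) that computes dogit choice probabilities and checks that setting all the \thetas to zero gives back plain MNL probabilities:

  import numpy as np

  def dogit_probs(V, theta):
      """Dogit choice probabilities for a single decision-maker.

      V     : array of systematic utilities, one per alternative
      theta : array of non-negative captivity parameters, one per alternative;
              setting theta to zero everywhere collapses the model to an MNL
      """
      expV = np.exp(V - V.max())   # subtract the max for numerical stability
      total = expV.sum()
      return (expV + theta * total) / ((1.0 + theta.sum()) * total)

  V = np.array([0.5, 0.2, -0.1])                      # hypothetical utilities: drive, bus, walk
  print(dogit_probs(V, np.zeros(3)))                  # all thetas zero: plain MNL probabilities
  print(dogit_probs(V, np.array([0.4, 0.0, 0.0])))    # some captivity to driving

Each vector of probabilities sums to one, and the second call illustrates captivity: with \theta_{1} = 0.4, the probability of driving can never fall below 0.4/1.4, no matter how unattractive driving becomes.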

I have never estimated a dogit model, but in the interest of paying tribute to those who set me on the path to becoming an applied econometrician, I’d like to write something that estimates a dogit model one day (though for better or for worse, I don’t often encounter multinomial choices in the things I study).

* I only know about the dogit because Marc Gaudry taught me the first econometrics class I ever took and Marcel Dagenais taught me the second one, both when I was an undergraduate. One day I decided to look up what my instructors’ contributions to their fields had been, and I stumbled upon it.