
The Goal of Scientific Communication Is Not to Impress But to Be Understood

One of our PhD students, whose work focuses on the Supplemental Nutrition Assistance Program (SNAP), writes:

I would like your opinion on something. When I presented this [paper] in the past, I received requests to include more background slides and information on SNAP (history, participation rates, eligibility rules, etc.), the poverty line (how it is calculated, etc.), as well as diff-in-diffs (parallel trends). I did not include those details because I thought most people knew this stuff, but that was obviously not the case for a few people who saw me present the paper. Yesterday, however, I received feedback that this information was redundant. This is not my job-market paper, but as I prepare for job-market talks, what do you suggest I do? Include background on the more common concepts and methods, or skip them? Do you have some general advice on deciding how much of that to include?

‘Metrics Monday: Good Things Come to Those Who Weight–Part I

I was sitting in my office on Friday afternoon when one of our third-year PhD students dropped by with an applied econometric question: “When should I use weights?”

After telling her to go read Solon et al.’s 2015 piece in the JHR symposium on empirical methods, I decided to reread that paper for myself and blog about it this week. In the near future, in part II, I’m hoping to tackle Andrews and Oster’s new NBER working paper on weighting for external validity.

Before I begin, some clarification: throughout this post, I’ll be discussing the use of sampling weights. If you are a Stata user, this refers to that statistical package’s -pweight-, i.e., “weights that denote the inverse of the probability that the observation is included because of the sampling design.” I have never had to rely on -aweight-, -fweight-, or -iweight-, so I wouldn’t know when to use them.

Suppose you oversample a specific group in order to get more precise estimates for that group. For instance, suppose you are interested in the opinion of LGBTQ students. If you randomly sample individuals from a given population of students, you may not have enough LGBTQ respondents in your sample, and so whatever descriptive statistics you come up with for that sub-group might be too noisy. Thus, you may wish to oversample LGBTQ respondents in order to improve precision. What I mean by this is that you would randomly sample respondents from each group–LGBTQ and non-LGBTQ–until you have the right number. So if you target a sample size of n=100 and you’d like 50% of respondents from each group, you split the population in two groups (assuming that’s easy to do; in the case of LGBTQ students, it might not be) and sample from each group until it has 50 observations.
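To make the weighting logic concrete, here is a minimal simulation sketch in Python (the group shares and outcome values are made up purely for illustration): stratify the sample so the small group is oversampled, attach to each respondent the inverse of their selection probability as a sampling weight (the -pweight- idea), and use those weights to recover the population mean that the raw sample mean misses.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: 10% belong to the group we want to oversample.
N = 100_000
group = rng.random(N) < 0.10                      # True = small group
outcome = np.where(group, 3.0, 1.0) + rng.normal(0, 1, N)

# Stratified design: draw 50 respondents from each group (oversampling the small one).
idx_small = rng.choice(np.flatnonzero(group), size=50, replace=False)
idx_large = rng.choice(np.flatnonzero(~group), size=50, replace=False)
sample = np.concatenate([idx_small, idx_large])

# Sampling weights = 1 / Pr(selection), i.e., Stata's -pweight-.
p_small = 50 / group.sum()                        # selection probability, small group
p_large = 50 / (~group).sum()                     # selection probability, large group
w = np.where(group[sample], 1 / p_small, 1 / p_large)

unweighted = outcome[sample].mean()                 # biased for the population mean
weighted = np.average(outcome[sample], weights=w)   # approximately recovers it

print(f"population mean:        {outcome.mean():.2f}")
print(f"unweighted sample mean: {unweighted:.2f}")
print(f"weighted sample mean:   {weighted:.2f}")
```

The unweighted sample mean overstates the population mean because the high-outcome small group makes up half the sample but only a tenth of the population; weighting by the inverse selection probabilities undoes that distortion.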

‘Metrics Monday: One IV for Two Endogenous Variables, and Testing for Mechanisms

A few months ago, a post in this series discussed a recently published article in the American Political Science Review by Acharya et al. (2016, ungated version here) in which the authors developed a method to test whether a mediator variable M is a mechanism whereby treatment variable D causes outcome variable Y.
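Purely for intuition, here is a toy sketch in Python (simulated data, hypothetical coefficients) of the basic two-stage "demediation" logic that controlled-direct-effect estimators of this kind build on: net out the part of the outcome operating through M, then see how much of the effect of D remains. This is a deliberately simplified illustration, not Acharya et al.'s actual estimator, which among other things handles intermediate confounders that this sketch ignores.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Simulated data: D affects Y both directly and through the mediator M.
D = rng.binomial(1, 0.5, n)
M = 0.8 * D + rng.normal(0, 1, n)          # mediator partly caused by treatment
Y = 1.0 * D + 1.5 * M + rng.normal(0, 1, n)

# Total effect of D on Y.
total = sm.OLS(Y, sm.add_constant(D)).fit()

# Stage 1: estimate the mediator's coefficient, then "demediate" the outcome.
stage1 = sm.OLS(Y, sm.add_constant(np.column_stack([D, M]))).fit()
Y_demediated = Y - stage1.params[2] * M

# Stage 2: regress the demediated outcome on D to get a controlled direct effect.
direct = sm.OLS(Y_demediated, sm.add_constant(D)).fit()

print(f"total effect:  {total.params[1]:.2f}")    # roughly 1.0 + 1.5*0.8 = 2.2
print(f"direct effect: {direct.params[1]:.2f}")   # roughly 1.0
```

If the direct effect is close to the total effect, M is unlikely to be an important mechanism; a large gap, as in this simulated example, is consistent with M doing much of the work.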

At the time, I suggested to one of my PhD students that she should use that method to test for a presumed mechanism in her job-market paper, but since her identification strategy was based on an IV, it really wasn’t clear that Acharya et al.’s method could be applied to her research question.

A few weeks ago, a new working paper by Dippel et al. (2017) was released titled “Instrumental Variables and Causal Mechanisms: Unpacking the Effect of Trade on Workers and Voters.” Although Dippel et al.’s application is really timely–Do trade shocks cause people to vote for populist parties by turning them into disgruntled workers?–I’ll focus in this post on their methodological innovation.