I teach a PhD class in applied empirical methods for Strategy research. The course emphasizes various approaches for causal inference using observational data. The main text is Angrist and Pischke.
I have organized my class around the idea of empirical etiquette: the radical idea that authors of academic papers should try to be polite to their readers. Sometimes good etiquette is method-specific, and other times it is not.
While I have found many great sources of information on the technical details of various methods for causal inference, there is no single reference on the main points of empirical etiquette. So, in the spirit of Emily Post or David Levine's cheap advice, I offer the following suggestions.
Here is a list of things that nearly every empirical paper should discuss, regardless of the particular method employed.
- Discuss the economic significance of your results. How does one interpret the estimates, and how much variance do they explain? Don't bury this. Put it in the introduction.
- Describe the ideal experiment that you would perform in a completely unconstrained world. What variable would you have to manipulate to answer your research question, and how would you do it? Be specific about measurement.
- Discuss what you see as the primary threat to causal inference. Is it an example of omitted variables, selection or simultaneity / reverse causality? What sort of bias would it produce if you ran a simple OLS regression?
- Right after you describe the main threat, explain in plain English how your empirical strategy tries to address it. This discussion will vary by method (see below), but should always be as explicit as possible about the assumptions you are asking the reader to buy into.
- Explain what you are measuring. Your choice of models and methods will dictate whether you are estimating a Population Average Treatment Effect, or a Treatment Effect for the treated, or maybe something else. What is it?
- Don't start by presenting the results of a really complicated model. Build up from something simpler, like OLS.
- When it's all over, go back and see whether the fancy methods produce answers that differ from the simple ones (i.e. do some Hausman-like tests).
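The "build up from OLS" point can be made concrete with a small simulation. This is a minimal numpy sketch, not anything from the text above: the variable names, effect sizes, and confounding structure are all invented for illustration. The true effect is 2.0, but the treatment is correlated with an omitted variable, so the simple and controlled specifications disagree in a Hausman-like way.

```python
import numpy as np

# Hypothetical simulation: true treatment effect is 2.0, but the treatment
# T is correlated with an omitted variable x, so regressing y on T alone
# is biased upward. Comparing the simple and richer specifications is an
# informal, Hausman-style check.
rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)                      # confounder
T = 0.5 * x + rng.normal(size=n)            # treatment, correlated with x
y = 2.0 * T + 1.0 * x + rng.normal(size=n)  # outcome

def ols(y, *regressors):
    """OLS slope coefficients (constant added automatically)."""
    X = np.column_stack([np.ones_like(y), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

b_simple = ols(y, T)[0]      # biased upward by the omitted x
b_control = ols(y, T, x)[0]  # close to the true 2.0
print(f"simple OLS: {b_simple:.2f}, with control: {b_control:.2f}")
```

When the two estimates differ this sharply, that gap itself is worth discussing in the paper, not just the final number.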
Selection on Observables (regression control and matching)
- State your maintained assumptions: The treatment (i.e. the main explanatory variable) is uncorrelated with all unobservables (i.e. the error term) after matching and/or conditioning on observables.
- Justify your choice of controls. This has two parts: a) they are exogenous (no bad controls!), and b) they are somehow related to your main threat.
- Show that means (and maybe other moments) of any exogenous control variables are balanced across treatment and controls groups after matching.
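One common way to report the balance check above is the standardized mean difference (SMD). The sketch below is a hypothetical numpy illustration (the covariate names and the 0.1 rule of thumb are my assumptions, not from the text): one covariate is balanced by construction, the other is not.

```python
import numpy as np

# Balance diagnostic for a binary treatment: the standardized mean
# difference (SMD) is scale-free, and a common rule of thumb flags
# |SMD| > 0.1 as imbalance worth discussing.
def standardized_mean_diff(x, treated):
    x1, x0 = x[treated], x[~treated]
    pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2)
    return (x1.mean() - x0.mean()) / pooled_sd

rng = np.random.default_rng(1)
n = 20_000
treated = rng.random(n) < 0.5
age = rng.normal(40, 10, n)                 # balanced by construction
size = rng.normal(0, 1, n) + 0.5 * treated  # imbalanced by construction

smd_age = standardized_mean_diff(age, treated)
smd_size = standardized_mean_diff(size, treated)
print(f"SMD age: {smd_age:.3f}, SMD size: {smd_size:.3f}")
```

Running the same diagnostic before and after matching shows the reader exactly what the matching bought you.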
Diff-in-Diffs and Panel Data
- State your maintained assumptions: Mean outcome changes in the control group are a valid estimate of counterfactual mean outcome changes in the treatment group.
- Check the maintained assumption using pre-treatment data. Sometimes this is called a pre-trends test. One particularly effective approach is what I call the Magic Picture.
- If enrollment varies over time, discuss the merits of the pre/post estimator for treated observations (assuming exogenous timing of enrollment) versus the typical diff-in-diffs estimator (assuming selection into treatment is exogenous conditional on fixed effects). Consider reporting both.
- Assuming you find a main effect, use some interactions to explore heterogeneity in the impact of treatment.
- Discuss the clustering of standard errors. Why did you pick a particular level of clustering, and are your results robust to alternatives? This can matter in panel data.
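The basic two-by-two diff-in-diffs estimator is simple enough to show in a few lines. This is a numpy sketch on simulated data; the group fixed effect, common time trend, and true treatment effect of 3.0 are all invented for illustration.

```python
import numpy as np

# Two-by-two diff-in-diffs on simulated data. The control group drifts up
# by 0.5 in the post period (the common trend); the treated group has a
# fixed level difference plus a true treatment effect of 3.0.
rng = np.random.default_rng(2)
n = 5_000  # observations per group-period cell

def cell(mean):
    return rng.normal(mean, 1.0, n)

ctrl_pre,  ctrl_post  = cell(1.0), cell(1.0 + 0.5)
treat_pre, treat_post = cell(2.0), cell(2.0 + 0.5 + 3.0)

# Difference the changes: the common trend (0.5) cancels out.
did = (treat_post.mean() - treat_pre.mean()) \
      - (ctrl_post.mean() - ctrl_pre.mean())
print(f"diff-in-diffs estimate: {did:.2f}")
```

In a real regression implementation of the same estimator, this is where the clustering discussion above bites: standard errors should typically be clustered at the level at which treatment is assigned.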
Instrumental Variables
IV is sort of a special case, since there is actually a huge literature on what could be construed as IV etiquette, and Angrist and Pischke (not surprisingly) go to town on the subject. Still, here are my two cents.
- State your maintained assumptions (1): You have an instrumental variable (Z) that is correlated with the outcome (Y) only through its effect on the treatment (T).
- State your maintained assumptions (2): Tell the story behind your IV. Why is it correlated with the treatment, and why is it plausibly uncorrelated with everything else that determines Y? Few IVs are perfect, so just give it your best shot.
- Report, or at least discuss, the first stage regression (T on Z) and the reduced form regression (Y on Z) in addition to the main IV results.
- Report the first-stage F-tests and perhaps other statistical tests for weak instruments.
- If you have more than one instrument (lucky you!) report Sargan's test of over-identifying restrictions. Failure to pass this test does not imply your IVs are invalid, but may suggest a LATE interpretation of your results, where each instrument measures a different effect.
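The first stage, reduced form, and IV estimate from the bullets above can all be computed by hand in the just-identified case. This is a hypothetical numpy simulation (instrument strength, confounding, and the true effect of 1.5 are all invented), using the Wald estimator, which coincides with 2SLS when there is one instrument and one endogenous regressor.

```python
import numpy as np

# Hand-rolled just-identified IV: report the first stage (T on Z), the
# reduced form (y on Z), the first-stage F-statistic, and the Wald/IV
# estimate, alongside the biased OLS estimate for comparison.
rng = np.random.default_rng(3)
n = 50_000
u = rng.normal(size=n)                # unobserved confounder
Z = rng.normal(size=n)                # instrument
T = 0.8 * Z + u + rng.normal(size=n)  # treatment (endogenous)
y = 1.5 * T + u + rng.normal(size=n)  # outcome; true effect = 1.5

def slope_and_f(x, w):
    """Bivariate OLS slope of w on x, plus the F-stat on that slope."""
    xc, wc = x - x.mean(), w - w.mean()
    b = xc @ wc / (xc @ xc)
    resid = wc - b * xc
    se = np.sqrt(resid @ resid / (len(w) - 2) / (xc @ xc))
    return b, (b / se) ** 2

first_stage, F = slope_and_f(Z, T)
reduced_form, _ = slope_and_f(Z, y)
b_iv = reduced_form / first_stage  # Wald estimator = 2SLS here
b_ols, _ = slope_and_f(T, y)       # biased by the confounder u
print(f"first-stage F: {F:.0f}, OLS: {b_ols:.2f}, IV: {b_iv:.2f}")
```

Reporting all three regressions, as the etiquette suggests, lets the reader see where the IV estimate comes from rather than taking the final number on faith.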
Regression Discontinuity
I'm a little shakier here. Someday soon I will write an RD paper and tighten up my proposed etiquette.
- State your maintained assumptions: There is a discontinuous change in the probability of treatment in a neighborhood of the discontinuity. Everything else changes smoothly.
- Explain whether the discontinuity is sharp (probability of treatment jumps from zero to one) or fuzzy (something smaller, since not all observations are compliers).
- Provide a graphical analysis of the first stage (change in treatment at the discontinuity) and the reduced form (change in the outcome at the discontinuity). This is what makes RD so cool!
- Check that exogenous control variables do not exhibit any discontinuous jumps around the cut point.
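A bare-bones sharp RD estimator illustrates the mechanics. This numpy sketch uses simulated data with an invented cutoff, bandwidth, and true jump of 2.0: fit a local linear regression on each side of the cutoff and take the difference in predicted outcomes at the cutoff.

```python
import numpy as np

# Minimal sharp RD: local linear fits on each side of the cutoff, with
# the treatment effect read off as the jump in intercepts at the cutoff.
rng = np.random.default_rng(4)
n = 20_000
x = rng.uniform(-1, 1, n)  # running variable, cutoff at 0
y = 1.0 + 0.5 * x + 2.0 * (x >= 0) + rng.normal(0, 0.5, n)

def intercept_at_cutoff(xs, ys, cutoff=0.0):
    """Local linear fit; returns the predicted outcome at the cutoff."""
    X = np.column_stack([np.ones_like(xs), xs - cutoff])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return beta[0]

h = 0.5  # bandwidth; in practice use a data-driven choice
left = (x < 0) & (x > -h)
right = (x >= 0) & (x < h)
rd_effect = intercept_at_cutoff(x[right], y[right]) \
            - intercept_at_cutoff(x[left], y[left])
print(f"RD estimate of the jump: {rd_effect:.2f}")
```

Applying the same intercept comparison to each exogenous control variable implements the smoothness check in the last bullet: those jumps should all be near zero.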