Schedule:
Friday, June 2, 2017
Bunker Hill (Hyatt Regency Washington, Washington DC)
* noted as presenting author
Thomas Cook, PhD, Professor and Faculty Fellow, Northwestern University, Evanston, IL
Regression discontinuity designs (RDD) assign individuals to conditions using a cutoff score on a continuous assignment variable: individuals on one side of the cutoff receive the treatment, and individuals on the other side usually receive no treatment and serve as the control condition. However, RDD produces causal estimates that are unbiased only at the cutoff; it requires correctly specifying how the assignment and outcome variables are related; and it is less precise than a randomized experiment. One promising approach to overcoming these limitations is to add an untreated outcome function from a nonequivalent comparison group to the basic RDD structure and to include it in the outcome analysis. This creates a Comparative Regression Discontinuity design (CRD) whose key assumption is that its three untreated regression segments are parallel. Two studies comparing randomized experiment and CRD results when this assumption holds have shown that CRD reduces imprecision and supports valid causal generalization to all treated cases, not just those at the cutoff. Less is known, however, about the performance of CRD when the parallel untreated segments assumption fails. This study investigates to what extent deviation from this parallelism assumption biases CRD treatment estimates.
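As a rough illustration of the two designs, the following is a minimal simulation sketch (all data-generating values are invented for illustration, and the parallelism assumption holds by construction). Treatment is assigned at or above the cutoff; the CRD analysis pools a nonequivalent comparison group, untreated on both sides of the cutoff, whose outcome function differs from the study group's untreated segments only by an intercept shift:

```python
import numpy as np

rng = np.random.default_rng(0)
n, cutoff, effect = 2000, 0.0, 3.0  # hypothetical values for illustration

# Continuous assignment variable; treatment goes to cases at/above the cutoff.
a = rng.normal(size=n)
t = (a >= cutoff).astype(float)

# Outcome: linear in the assignment variable plus a constant treatment effect.
y = 1.0 + 2.0 * a + effect * t + rng.normal(scale=0.5, size=n)

# Basic RDD estimate: OLS of y on [1, a - cutoff, t]; the coefficient on t
# estimates the causal effect at the cutoff (given a correct functional form).
X = np.column_stack([np.ones(n), a - cutoff, t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rdd_effect = beta[2]

# CRD sketch: a nonequivalent comparison group, untreated throughout, whose
# outcome function is parallel to the study group's untreated segments
# (same slope, different intercept).
m = 2000
ac = rng.normal(size=m)
yc = 0.5 + 2.0 * ac + rng.normal(scale=0.5, size=m)

# Pooled model: intercept, centered assignment variable, group indicator,
# and treatment indicator; the treatment coefficient is the CRD estimate.
A = np.concatenate([a, ac]) - cutoff
T = np.concatenate([t, np.zeros(m)])
G = np.concatenate([np.ones(n), np.zeros(m)])
Y = np.concatenate([y, yc])
Xc = np.column_stack([np.ones(n + m), A, G, T])
beta_c, *_ = np.linalg.lstsq(Xc, Y, rcond=None)
crd_effect = beta_c[3]
```

Because the comparison group contributes untreated observations on both sides of the cutoff, the pooled fit uses more data and a common slope, which is the source of the precision gain the abstract describes; if the comparison segments were not parallel, the common-slope model would be misspecified and the treatment coefficient biased.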
Method: Study I is a within-study comparison in which treatment effect estimates from a synthetic RDD are compared to benchmark estimates from an RCT. The synthetic RDD dataset is created from a highly structured RCT dataset by choosing a continuous assignment variable, setting a cutoff score, and systematically deleting the control cases above the cutoff and the treatment cases below it. Study II is a large Monte Carlo simulation designed to test the bias and precision of CRD treatment estimates under varying degrees of violation of the parallel untreated segments assumption, treatment effect size, correlation between the assignment variable and the binary treatment indicator, and ratio of added comparison cases to the original N.
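The carving step in Study I can be sketched as follows. This is a toy illustration, not the study's actual data or code: the RCT data are simulated, a pretest is (arbitrarily) chosen as the assignment variable, and cases inconsistent with a sharp RDD are deleted:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Toy RCT: a continuous covariate later used as the assignment variable,
# random treatment assignment, and an outcome with a constant effect of 2.
pretest = rng.normal(size=n)           # chosen as the assignment variable
treated = rng.integers(0, 2, size=n)   # randomized assignment
outcome = pretest + 2.0 * treated + rng.normal(scale=0.5, size=n)

# RCT benchmark: difference in means from the full randomized data.
benchmark = outcome[treated == 1].mean() - outcome[treated == 0].mean()

cutoff = 0.0

# Carve a sharp RDD out of the RCT: keep treated cases at/above the cutoff
# and control cases below it; delete the rest.
keep = ((treated == 1) & (pretest >= cutoff)) | ((treated == 0) & (pretest < cutoff))
p, t, y = pretest[keep], treated[keep], outcome[keep]

# The synthetic design is sharp: treatment status is now a deterministic
# function of the assignment variable.
assert np.all((p >= cutoff) == (t == 1))

# RDD estimate from the carved data, to be compared against the benchmark.
X = np.column_stack([np.ones(keep.sum()), p - cutoff, t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rdd_est = beta[2]
```

Because assignment in the parent dataset was randomized, the full-sample difference in means serves as an unbiased benchmark against which the carved design's estimate can be judged, which is the logic of a within-study comparison.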
Results: The within-study comparison shows that CRD produces unbiased treatment estimates when the parallelism assumption is met. When the assumption is violated, however, CRD produces biased estimates. Results of the ongoing simulation study will show how robust CRD treatment estimates are to different degrees of violation of the parallelism assumption.
Conclusion: We strongly recommend the use of CRD instead of basic RDD in policy research. Researchers should, however, be careful to verify that the parallelism assumption of CRD holds.