Abstract: Comparative Regression Discontinuity: Mitigating the Limitations of Regression Discontinuity Design (Society for Prevention Research 23rd Annual Meeting)

48 Comparative Regression Discontinuity: Mitigating the Limitations of Regression Discontinuity Design

Schedule:
Wednesday, May 27, 2015
Columbia Foyer (Hyatt Regency Washington)
Yasemin Kisbu Sakarya, PhD, Assistant Professor, Koc University, Istanbul, Turkey
Thomas Cook, PhD, Professor and Faculty Fellow, Northwestern University, Evanston, IL
Yang Tang, PhD, Postdoctoral Fellow, Northwestern University, Evanston, IL
Introduction: The regression discontinuity design (RDD) assigns individuals to conditions using a cutoff score on a continuous assignment variable. Individuals on one side of the cutoff receive the treatment, and individuals on the other side usually receive no treatment, serving as the control condition. RDD is considered to produce results closest to those of randomized controlled trials (RCTs) because the selection mechanism is fully known. However, RDD requires correct specification of the functional form of the relation between the assignment variable and the outcome variable, has lower statistical power than an RCT, and supports causal inference only at the cutoff score, limiting generalization away from it. The RDD literature has mostly focused on explicitly stating these limitations but has largely failed to develop methods to overcome them. A simple variant of RDD, comparative regression discontinuity (CRD), can address these restrictions. Following Wing and Cook, who studied adding a pretest measure as the comparison function for CRD, this study investigates the performance of CRD using a proxy pretest as the comparison function in two situations: when the parallel untreated regression functions assumption of CRD is met, and when it is not. The study also investigates using a non-equivalent comparison group as the comparison function for CRD.
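
To make the logic of CRD concrete, the following is a minimal sketch in Python. It is not the authors' code: it simulates a sharp RDD in which the untreated pretest and posttest regression functions are parallel by construction, then recovers the treatment effect from the difference in pre-post gains across the cutoff. All variable names and parameter values are illustrative.

```python
# Minimal CRD sketch (illustrative; not the study's actual code or data).
# The pretest serves as the comparison function, and the untreated pretest
# and posttest regression functions are parallel by construction.
import numpy as np

rng = np.random.default_rng(seed=0)
n, cutoff, tau = 1000, 0.0, 2.0           # tau is the true treatment effect
assign = rng.normal(size=n)               # continuous assignment variable
treated = (assign >= cutoff).astype(int)  # sharp RDD assignment rule
pretest = 1.0 + 0.5 * assign + rng.normal(scale=0.5, size=n)
posttest = (1.5 + 0.5 * assign            # parallel to the pretest function
            + tau * treated + rng.normal(scale=0.5, size=n))

# Under parallelism, the pre-post gain differs across the cutoff only by tau,
# so comparing mean gains on the two sides recovers the treatment effect.
gain = posttest - pretest
tau_hat = gain[treated == 1].mean() - gain[treated == 0].mean()
print(f"CRD treatment effect estimate: {tau_hat:.2f}")  # close to 2.0
```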

Method: We conduct a within-study comparison in which treatment effect estimates from a synthetic RDD are compared to estimates from an RCT. The RCT estimate of the treatment effect serves as the causal benchmark for assessing the performance (i.e., bias in estimates and statistical precision) of the CRD treatment effect estimate. The synthetic RDD dataset is created from a highly structured RCT dataset by choosing a continuous assignment variable, selecting a cutoff score, and systematically deleting the control cases above the cutoff and the treatment cases below it.
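
A sketch of the deletion step is given below, with hypothetical column names (the study's actual dataset and variable names are not specified in the abstract):

```python
# Carve a synthetic sharp RDD out of an RCT dataset (column names are
# hypothetical). Treated cases are kept at or above the cutoff and
# control cases below it; all other cases are deleted.
import pandas as pd

def make_synthetic_rdd(rct: pd.DataFrame, assign_col: str,
                       cutoff: float) -> pd.DataFrame:
    above = rct[assign_col] >= cutoff
    keep = (above & (rct["treated"] == 1)) | (~above & (rct["treated"] == 0))
    return rct.loc[keep].copy()
```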

Results: CRD, whether supplemented with a proxy pretest or a non-equivalent comparison group, produced unbiased treatment effect estimates when the parallelism assumption was met. However, when the assumption was violated, CRD produced highly biased estimates. In terms of statistical power, CRD had higher power than RDD whether or not the parallelism assumption was violated.

Conclusion: We strongly recommend the use of CRD instead of RDD in policy research. However, researchers should be careful to verify that the parallelism assumption of CRD is met.