Abstract: Implementation Quality and Program Outcomes: A Quasi-Experimental Test of Keepin' It REAL (Society for Prevention Research 21st Annual Meeting)

116 Implementation Quality and Program Outcomes: A Quasi-Experimental Test of Keepin' It REAL

Schedule:
Wednesday, May 29, 2013
Pacific B (Hyatt Regency San Francisco)
Jonathan Pettigrew, PhD, Assistant Professor, University of Tennessee, Knoxville, Knoxville, TN
John W. Graham, PhD, Professor, Penn State University, University Park, PA
Michael L. Hecht, PhD, Distinguished Professor, Penn State University, State College, PA
Michelle Miller-Day, PhD, Professor, Chapman University, Orange, CA
YoungJu Shin, PhD, Assistant Professor, Indiana University - Purdue University, Indianapolis, Indianapolis, IN
Janice L. Krieger, PhD, Assistant Professor, Ohio University, Columbus, OH
Introduction. The dissemination of evidence-based programs raises important questions about the quality of program implementation across sites. Poor implementation quality (IQ) is known to reduce program effects; thus, examining factors such as participant responsiveness, quality of delivery, and adherence to content is an important part of program evaluation.

Methods. As part of a larger trial, two versions of the keepin' it REAL (kiR) 7th grade drug prevention intervention were implemented in 25 schools in rural school districts in Pennsylvania and Ohio. Teachers (n = 31) implementing kiR in 78 different classes were directed to set up a camcorder in the back of the room to record each of 10 lessons and were provided a $10 incentive for completing a short online evaluation after each lesson and for mailing videos of each lesson to project staff. IQ was measured through observational coding of approximately four videos per class. Specific variables included adherence, teacher engagement (attentiveness, enthusiasm, seriousness, clarity, positivity), a global rating of teacher delivery quality, and student engagement (attention, participation). An exploratory factor analysis showed that teacher engagement, student engagement, and delivery quality formed one factor, which we labeled delivery. We used adherence and delivery as IQ variables in analysis. Self-report student surveys measured outcomes of interest, including refusal and response efficacy, descriptive and injunctive norms, and substance use. Surveys were administered on scannable forms prior to program delivery and in the spring following program delivery.
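The single-factor finding above can be illustrated with a minimal simulation. This sketch is not the study's analysis: the item count, loadings, and noise level are assumptions for illustration. It simulates several observer-rated items driven by one latent "delivery" construct and checks the eigenvalues of their correlation matrix, where a dominant first eigenvalue is consistent with a one-factor solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one latent "delivery" factor driving seven
# observer-rated items (engagement and delivery-quality ratings).
n = 78  # number of observed classes, taken from the abstract
latent = rng.normal(size=n)
items = np.column_stack([
    latent + rng.normal(scale=0.5, size=n)  # illustrative loadings/noise
    for _ in range(7)
])

# Quick factor-structure check: eigenvalues of the item correlation
# matrix. A dominant first eigenvalue suggests a single factor.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
share = eigvals[0] / eigvals.sum()  # variance share of the first factor
print(round(share, 2))
```

A full exploratory factor analysis would rotate and estimate loadings rather than stop at eigenvalues, but the eigenvalue check conveys the core idea that several correlated ratings can reflect one underlying construct.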

Results. We used a mixed model design that accounted for missing data to examine IQ effects on classroom-level outcomes, statistically controlling for school-level effects. To rule out the possibility that IQ had no effect on any of the DVs, we first ran a single omnibus test using a summary IQ variable to predict a summary DV while controlling for pre-test levels, then ran three separate omnibus tests, one for each of the three DV categories (efficacy, norms, drug use). These procedures helped control experiment-wise Type I error. IQ significantly predicted the summary DV (p = .04). Delivery also significantly predicted use (p = .005) and norms (p = .03) but not efficacy (p = .42); adherence significantly predicted norms (p = .02) but not use (p = .11) or efficacy (p = .20).
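The analytic logic of predicting an outcome from IQ while controlling for pre-test can be sketched as follows. This is a deliberate simplification: it uses plain OLS on simulated data rather than the study's mixed model, so it omits school-level random effects and missing-data handling, and every variable name and effect size is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated classroom-level data (illustrative, not the study's data):
# an outcome driven by an implementation-quality (IQ) score and by the
# pre-test score, plus noise.
n = 78
pretest = rng.normal(size=n)
iq = rng.normal(size=n)
outcome = 0.4 * iq + 0.6 * pretest + rng.normal(scale=0.5, size=n)

# OLS stand-in for the mixed model: regress the outcome on IQ while
# controlling for pre-test. Columns: intercept, IQ, pre-test.
X = np.column_stack([np.ones(n), iq, pretest])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
iq_effect = beta[1]  # estimated IQ effect, adjusted for pre-test
print(round(iq_effect, 2))
```

In the actual study, the same covariate-adjusted comparison was made within a mixed model so that classrooms nested in the same school did not count as independent observations.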

Conclusions. This study suggests that IQ is analogous to dosage: when the program is delivered well, students show more positive outcomes than when it is implemented poorly. Our study also supports measuring multiple aspects of IQ, although they may empirically be part of a single latent construct.