Abstract: Alternative Strategies for Routine Monitoring of Implementation Quality (Society for Prevention Research 24th Annual Meeting)

332 Alternative Strategies for Routine Monitoring of Implementation Quality

Schedule:
Thursday, June 2, 2016
Seacliff C (Hyatt Regency San Francisco)
* noted as presenting author
B.K. Elizabeth Kim, PhD, Postdoctoral Scholar, University of California, Berkeley, Berkeley, CA
Valerie Shapiro, PhD, Assistant Professor, University of California, Berkeley, Berkeley, CA
Kelly Whitaker, PhD, Postdoctoral Fellow, University of Washington, Seattle, WA
Sophie Shang, Undergraduate Student, University of California, Berkeley, Berkeley, CA
Shelby Lawson, Student, University of California, Berkeley, Berkeley, CA
Introduction: Monitoring implementation fidelity is essential to implementation success and intervention effectiveness (e.g., Borrelli, 2011). Implementation monitoring, however, is challenging in routine practice (Durlak & Dupre, 2008; Han & Weiss, 2005). Two commonly recommended monitoring strategies are external agent observations and high-frequency implementer self-report logs. These methods, however, are extremely resource intensive and often constrained by missing data. Many studies have found limited correlations between these monitoring strategies (Schoenwald et al., 2011) but internal consistency over time within modality (Kim et al., 2015; Breitenstein et al., 2010). This internal consistency has led implementation scientists to wonder whether less frequent data collection, covering larger spans of time, might be a reliable and valid approach to routine monitoring of implementation quality. Two reviews using retrospective fidelity ratings have shown promise for this methodology (Bond et al., 2000). This paper seeks to understand whether evoking a memory of implementing specific lessons (to facilitate retrospective recall) leads to consistent within-implementer reports of implementation quality across lessons.

Methods: Data come from a study of a district-wide implementation of a universal SEL curriculum in 11 elementary schools that provides teachers with lesson plans and teaching strategies. An online survey was administered to teachers at the end of the 2014-2015 school year. A response rate of 73% (n=137) was achieved, yielding a sample that was 84% female and 59% European American. Teachers were asked to identify their favorite and least favorite lessons taught in the past year, and then answered questions about these specific lessons (e.g., preparation time, teaching time, quality of lesson delivery, student engagement). We compared the extent to which teachers' self-assessments of implementation quality were consistent or differed between the two evoked lessons.
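As a concrete illustration of this within-teacher comparison, the sketch below shows one way the paired survey responses could be organized: one row per teacher, with each implementation-quality measure reported twice (favorite vs. least favorite lesson). The column names, ratings, and pandas-based approach are illustrative assumptions, not the study's actual instrument or analysis code.

```python
# Illustrative sketch only: hypothetical column names and ratings, not the
# study's actual survey data. One row per teacher; each implementation-quality
# measure appears twice (favorite vs. least favorite evoked lesson).
import pandas as pd

responses = pd.DataFrame({
    "teacher_id":             [1, 2, 3],
    "prep_minutes_fav":       [20, 25, 15],
    "prep_minutes_least":     [20, 20, 15],
    "delivery_quality_fav":   [5, 4, 5],   # hypothetical 1-5 rating
    "delivery_quality_least": [3, 3, 4],
})

# Within-teacher differences: values near zero indicate consistent reports
# across the two evoked lessons; large values indicate the measure
# distinguishes between them.
for measure in ["prep_minutes", "delivery_quality"]:
    diff = responses[f"{measure}_fav"] - responses[f"{measure}_least"]
    print(measure, "mean within-teacher difference:", round(diff.mean(), 2))
```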

Results: Teachers reported spending the same amount of time (approximately 20 minutes) preparing for the two evoked lessons (p=.78). In contrast, teachers reported differences between the two evoked lessons in the amount of time spent delivering the lesson (p<.001; d=1.00), the quality of delivery (p<.001; d=1.34), and student engagement (p<.001; d=1.76). In addition to informing the analysis of internal consistency, these large differences provide evidence of differential validity.
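The reported p-values and Cohen's d values are consistent with paired, within-teacher comparisons. The sketch below shows one way such a comparison could be computed; it assumes paired t-tests and an effect size based on the standard deviation of the within-teacher differences, and uses hypothetical data, since the abstract does not specify the exact test or effect-size formula used.

```python
# Minimal sketch of a paired comparison with a Cohen's d; assumes a paired
# t-test and d computed from the within-teacher differences (the abstract
# does not state the exact test or effect-size formula used).
import numpy as np
from scipy import stats

def paired_comparison(favorite, least_favorite):
    """Return the paired t-test p-value and a Cohen's d for the difference."""
    favorite = np.asarray(favorite, dtype=float)
    least_favorite = np.asarray(least_favorite, dtype=float)
    diff = favorite - least_favorite
    _, p_value = stats.ttest_rel(favorite, least_favorite)
    d = diff.mean() / diff.std(ddof=1)  # d based on the SD of the differences
    return p_value, d

# Hypothetical delivery-quality ratings (1-5) for six teachers.
fav = [5, 4, 5, 4, 5, 4]
least = [3, 3, 4, 2, 3, 3]
p, d = paired_comparison(fav, least)
print(f"p = {p:.3f}, d = {d:.2f}")
```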

Implications: Researchers and program administrators often seek efficient methods for assessing implementation quality. This study evoked specific lessons to identify the range of implementation reports that might be retrospectively recalled. In contrast to real-time reports, which tend to be highly consistent within implementers, large differences were found using this retrospective strategy.