Abstract: Implications for Design, Sampling, and Measures When Both Conditions in a Randomized Control Trial Implement an Evidence-Based Program (Society for Prevention Research 24th Annual Meeting)


Schedule:
Wednesday, June 1, 2016
Grand Ballroom C (Hyatt Regency San Francisco)
Yibing Li, PhD, Researcher, American Institutes for Research, Washington, DC
Nick Yoder, PhD, Researcher and Technical Assistance Consultant, American Institutes for Research, Chicago, IL
Recent advances in prevention science have resulted in more conceptually grounded and empirically validated preventive interventions to promote social-emotional development in elementary schools (Jones, Brown, Hoglund, & Aber, 2010). However, various methodological challenges have limited the quality and generalizability of the knowledge base (Hundert et al., 1999). First, few studies have employed school-level randomized designs capable of supporting definitive causal claims about intervention impacts on outcomes. Second, despite widespread recognition that interventions operate at multiple levels of the school system (e.g., students, classrooms), few studies have systematically documented how an intervention unfolds in these nested systems. Additionally, few studies have recorded how variations in the intensity and quality of implementation are linked to outcomes across levels and domains of functioning.

This study is the first to assess whether a coordinated and integrated model for schoolwide support combined with an evidence-based program is more effective at improving student outcomes than implementing the program on its own. This differs from most studies, which compare a program to a no-intervention comparison condition. Both groups in this study implement the evidence-based program Promoting Alternative Thinking Strategies (PATHS). The intervention group implements PATHS in addition to a schoolwide model of support based on the CASEL Guide for Schoolwide Social and Emotional Learning. This design allows program implementation to be monitored and documented in both groups, and these data can be linked to various outcomes. In this paper we summarize the design of the evaluation and the sampling plan, and present the measures used to document implementation, process, and outcomes across levels and domains.

The evaluation uses a matched-pair cluster randomized design. A pairwise matching procedure was applied prior to randomization to maximize demographic and contextual similarity between participating schools. The study collects data on (1) perceptions and attitudes of instructional and administrative staff, (2) implementation data from teachers, and (3) multiple domains of student functioning, including social-emotional competencies, behavior, and academic performance. More than 90 percent of teachers (236 out of 257) provided implementation data, and half of these teachers were also observed implementing PATHS. About 60 percent of school staff members responded and consented to participate in the study. More than 1,100 students were rated by their teachers on their social-emotional competence, behavior, and academic engagement. Issues related to establishing baseline equivalence, missing data, and analytic challenges will be discussed.
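To make the matched-pair procedure concrete, the sketch below shows one common way such pairing can be done: greedily matching the two most similar schools on standardized covariates, then randomly assigning one school per pair to each condition. The school names, covariates, and the greedy nearest-neighbor approach are illustrative assumptions, not the study's actual matching algorithm or data.

```python
import random
from itertools import combinations
from statistics import mean, pstdev

# Hypothetical school-level covariates (enrollment, % free/reduced-price
# lunch, prior achievement). Values are invented for illustration only.
schools = {
    "A": (450, 0.62, 0.48), "B": (470, 0.58, 0.51),
    "C": (820, 0.35, 0.70), "D": (790, 0.40, 0.66),
    "E": (300, 0.80, 0.30), "F": (310, 0.77, 0.33),
}

def standardize(values):
    m, s = mean(values), pstdev(values)
    return [(v - m) / s if s else 0.0 for v in values]

# Standardize each covariate so no single scale dominates the distance.
names = list(schools)
cols = list(zip(*(schools[n] for n in names)))
profile = dict(zip(names, zip(*(standardize(c) for c in cols))))

def dist(a, b):
    # Euclidean distance between standardized covariate profiles.
    return sum((x - y) ** 2 for x, y in zip(profile[a], profile[b])) ** 0.5

# Greedy pairing: repeatedly pair the two most similar unpaired schools.
unpaired, pairs = set(names), []
while len(unpaired) > 1:
    a, b = min(combinations(sorted(unpaired), 2), key=lambda p: dist(*p))
    pairs.append((a, b))
    unpaired -= {a, b}

# Within each matched pair, randomize one school to each condition.
rng = random.Random(2016)
assignment = {}
for a, b in pairs:
    treat, control = rng.sample([a, b], 2)
    assignment[treat] = "PATHS + schoolwide model"
    assignment[control] = "PATHS only"
```

Because randomization happens within pairs, each condition receives one school from every matched pair, which is what balances demographic and contextual characteristics across arms.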