Abstract: Using a Meta-Analytic Technique to Assess the Relationship Between Treatment Intensity and Program Effects in a Cluster-Randomized Trial (Society for Prevention Research 23rd Annual Meeting)

349 Using a Meta-Analytic Technique to Assess the Relationship Between Treatment Intensity and Program Effects in a Cluster-Randomized Trial

Schedule:
Thursday, May 28, 2015
Columbia A/B (Hyatt Regency Washington)
* noted as presenting author
Joshua R. Polanin, PhD, Postdoctoral Fellow, Vanderbilt University, Nashville, TN
Dorothy Espelage, PhD, Professor, University of Illinois, Champaign, IL
Introduction

School bullying and delinquent behaviors are persistent and pervasive problems for schools, and they have lasting effects for all individuals involved (Copeland, Wolke, Angold, & Costello, 2013; Espelage, Low, Rao, Hong, & Little, 2013). As a result, policymakers and practitioners have attempted to thwart these ill effects using school-based interventions. Recent meta-analyses have found, however, that these programs produce only moderate effects (Ttofi & Farrington, 2011). Consequently, it is important to investigate the reasons for such findings further. One promising approach is to assess the relation between treatment intensity variables and program outcomes. Unfortunately, few treatment intensity variables have been utilized in the school-based prevention literature, and even when such variables are available, it is often cumbersome to model the relation between treatment intensity and outcomes.

Methods

To this end, we sought to measure implementation variables and to use them in a meta-analytic model to assess the relationship between implementation and treatment effectiveness. The context for this project is a large-scale, multi-site, cluster-randomized trial; 36 schools, 144 teachers, and 3,616 students participated in three waves of data collection. Schools were randomly assigned to the treatment or control condition within pairs. We measured how long teachers prepared for each lesson and the length of each lesson. Further, we collected information on whether the teachers spent extra money on lessons, consulted with someone outside the classroom while preparing a lesson, and, most importantly, whether the teachers retaught components of the program outside of the lesson. We also measured students’ bullying perpetration, victimization, physical aggression, homophobic perpetration and victimization, and sexual violence perpetration and victimization. A standardized mean-difference effect size was calculated for each pair of schools using the student data, producing 18 effect sizes. A novel meta-analytic meta-regression model, applying the robust variance estimation technique (Hedges, Tipton, & Johnson, 2010), was estimated. We used the teachers’ implementation information to model the treatment effects across each pair of schools (Konstantopoulos, 2011).
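The two computational steps above can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions, not the authors' actual analysis code: the function names and synthetic data are hypothetical, the standardized mean difference is computed as Cohen's d with its usual large-sample variance, and the robust variance estimator is a basic cluster-robust (sandwich) version of the approach described by Hedges, Tipton, and Johnson (2010), without their small-sample corrections.

```python
import numpy as np

def smd_effect_size(m_t, m_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) for one school pair,
    with its large-sample sampling variance."""
    sp = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                 / (n_t + n_c - 2))          # pooled SD
    d = (m_t - m_c) / sp
    v = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return d, v

def rve_meta_regression(es, v, X, cluster):
    """Inverse-variance weighted meta-regression with cluster-robust
    (sandwich) standard errors -- a simplified sketch of robust
    variance estimation, omitting small-sample adjustments.

    es: effect sizes; v: sampling variances; X: design matrix with an
    intercept column; cluster: cluster id for each effect size.
    """
    W = np.diag(1.0 / v)                      # inverse-variance weights
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    b = XtWX_inv @ X.T @ W @ es               # WLS coefficients
    resid = es - X @ b
    # Cluster-robust "meat": sum over clusters of (X' W e)(X' W e)'
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(cluster):
        idx = cluster == c
        u = X[idx].T @ W[np.ix_(idx, idx)] @ resid[idx]
        meat += np.outer(u, u)
    V = XtWX_inv @ meat @ XtWX_inv            # sandwich covariance
    return b, np.sqrt(np.diag(V))

# Hypothetical usage: 18 pair-level effect sizes regressed on a
# made-up "lesson preparation time" moderator.
rng = np.random.default_rng(0)
prep = rng.uniform(0, 2, 18)
es = 0.1 + 0.2 * prep + rng.normal(0, 0.05, 18)
v = np.full(18, 0.04)
X = np.column_stack([np.ones(18), prep])
cluster = np.arange(18)                       # one effect size per pair
b, se = rve_meta_regression(es, v, X, cluster)
```

With one effect size per cluster, as in this toy data, the sandwich estimator reduces to the familiar heteroskedasticity-robust form; the RVE machinery matters when several correlated effect sizes come from the same school pair.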

Results

The results indicated that, for the second wave of data collection (i.e., the first posttest), stronger treatment effects were found when teachers and program implementers spent a greater amount of time preparing lessons (b = -.07, SE = .02, p < .01), provided additional financial resources, and received outside consultation and support. Results from the third wave of data collection did not yield significant relationships.

Conclusions

We conclude that (a) researchers should measure and report implementation information, and (b) meta-analytic techniques can be used to model the relationship between implementation and program effects.