Abstract: Estimating Mediated Effects in Pretest-Posttest Control Group Designs (Society for Prevention Research 22nd Annual Meeting)

71 Estimating Mediated Effects in Pretest-Posttest Control Group Designs

Schedule:
Wednesday, May 28, 2014
Regency D (Hyatt Regency Washington)
Matthew J. Valente, BS, Graduate Research Assistant, Arizona State University, Tempe, AZ
David P. MacKinnon, PhD, Professor, Arizona State University, Tempe, AZ
Introduction: Pretest-posttest control group designs are common in prevention research: units are randomly assigned to conditions, and a mediator and an outcome are measured both before and after the intervention is delivered to one group. The purpose of this abstract is to describe statistical and methodological aspects of this common prevention design. Estimating mediated effects helps determine which aspects of a program were effective, allowing researchers to address two questions: "Is my program affecting the targeted mediator variables?" and "Are the hypothesized mediator variables affecting the outcome in the hypothesized way?" Despite the popularity and usefulness of these designs and the importance of testing mediated effects (MacKinnon, 2008), there is a lack of formal information on how well various statistical methods estimate mediated effects in pretest-posttest designs. Common methods include analysis of difference scores, analysis of residualized change scores, analysis of covariance with pretest scores as a covariate, and path analysis. We investigate the conceptual basis of these methods and assess their performance in a simulation study.
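Two of the methods named above can be contrasted in a few lines. The sketch below (our own illustration, not the authors' code) estimates the treatment-to-mediator path in a simulated pretest-posttest design with a difference-score regression and with ANCOVA using the pretest as a covariate; all variable names, the pretest stability coefficient (0.5), and the effect size (0.39) are assumptions.

```python
# Illustrative sketch (assumed data-generating values, not the authors' code):
# estimating the a-path (treatment -> posttest mediator) two ways.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.integers(0, 2, n)           # random assignment (0 = control, 1 = treatment)
m_pre = rng.normal(0, 1, n)         # mediator at pretest
m_post = 0.5 * m_pre + 0.39 * x + rng.normal(0, 1, n)  # assumed true a-path = 0.39

# Difference-score analysis: regress (M_post - M_pre) on X
d = m_post - m_pre
X_diff = np.column_stack([np.ones(n), x])
a_diff = np.linalg.lstsq(X_diff, d, rcond=None)[0][1]

# ANCOVA: regress M_post on X with M_pre as a covariate
X_cov = np.column_stack([np.ones(n), x, m_pre])
a_ancova = np.linalg.lstsq(X_cov, m_post, rcond=None)[0][1]

print(f"a-path via difference scores: {a_diff:.3f}")
print(f"a-path via ANCOVA:            {a_ancova:.3f}")
```

With random assignment both estimators are unbiased for the a-path, but their standard errors differ, which is one reason the methods can perform differently in the simulation described next.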

Method: A Monte Carlo simulation study was used to investigate the statistical performance and confidence interval coverage of the different methods for pretest-posttest designs. The study crossed sample sizes often found in prevention research (50, 100, 200, 500, and 1,000) with parameter values corresponding to zero, small, medium, and large effect sizes.
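A Monte Carlo study of this kind can be sketched as a loop over simulated data sets, recording how often the test of a path rejects the null. The snippet below is a minimal hedged sketch under assumed parameter values (pretest stability 0.5, a-path 0.39, n = 200); it is not the authors' simulation code and will not reproduce the percentages reported in the Results.

```python
# Minimal Monte Carlo sketch (assumed parameters, not the authors' code):
# empirical rejection rate for the X -> M_post path, regressing the
# posttest mediator on treatment and the pretest mediator.
import numpy as np

def rejection_rate(a=0.39, n=200, reps=1000, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.integers(0, 2, n)
        m_pre = rng.normal(0, 1, n)
        m_post = 0.5 * m_pre + a * x + rng.normal(0, 1, n)
        X = np.column_stack([np.ones(n), x, m_pre])
        beta = np.linalg.lstsq(X, m_post, rcond=None)[0]
        resid = m_post - X @ beta
        sigma2 = resid @ resid / (n - 3)                 # residual variance
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        hits += abs(beta[1] / se) > 1.96                 # normal approx. to the t test
    return hits / reps

print(f"empirical power (assumed effect): {rejection_rate():.3f}")
print(f"empirical Type I error rate:      {rejection_rate(a=0.0):.3f}")
```

Setting the effect to zero yields the empirical Type I error rate; nonzero effects yield empirical power, and repeating the loop across methods and sample sizes gives the comparisons summarized below.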

Results: For a sample size of 200, path analysis provided the most accurate Type I error rate for the effect of the independent variable on the mediator at posttest (5%) and the greatest statistical power for small (30.4%), medium (99.4%), and large (100%) effects. Path analysis also provided the most accurate Type I error rate for the effect of the posttest mediator on the posttest outcome when there was no effect (5.4%) and the greatest statistical power for small (30.3%), medium (99.2%), and large (100%) effects.

Conclusions: Results suggest that path analysis performs best for assessing mediated effects in pretest-posttest designs. Path analysis may have performed best because it matches the data-generating model used in this simulation.