Schedule:
Tuesday, May 29, 2018
Columbia A/B (Hyatt Regency Washington, Washington, DC)
* noted as presenting author
Ji Hoon Ryoo, PhD, Assistant Professor, University of Virginia, Charlottesville, VA
Elise Pas, PhD, Associate Scientist, Johns Hopkins University Bloomberg School of Public Health, Baltimore, MD
Rashelle Musci, PhD, Assistant Professor, The Johns Hopkins University, Baltimore, MD
Catherine Bradshaw, PhD, Professor and Associate Dean for Research & Faculty Development, University of Virginia, Charlottesville, VA
Introduction: Power for empirical studies using multilevel designs can be calculated with two approaches: statistical formulas embedded in software, for which users supply parameters based on a series of assumptions or past data (e.g., Optimal Design (OD)), and Monte Carlo simulation (e.g., in Mplus or Stata). The former is an a priori power analysis, as it is conducted before data collection using assumed but statistically defensible values. The latter, simulation-based approach is a practical and viable alternative: it can accommodate commonly encountered threats to power, such as missing data patterns, and can use the same statistical models planned for the outcome analyses. Although both approaches are accepted for multilevel study designs, the extant literature does not adequately address variability in cluster sizes or small sample sizes at the within-cluster (individual) level. In this paper, we examine how variation in the number of individual-level units affects power estimates under both approaches.
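To make the simulation approach concrete, below is a minimal Monte Carlo sketch in Python (the abstract cites Mplus and Stata; Python is used here purely for illustration). The function name, parameter defaults, and the cluster-level t-test analysis are all illustrative assumptions, not the study's actual inputs or outcome model.

```python
# Minimal Monte Carlo power sketch for a balanced two-arm
# cluster-randomized trial. Outcomes are standardized (total
# variance 1), so the between-cluster variance equals the ICC.
# All defaults are illustrative assumptions.
import numpy as np
from scipy import stats

def simulate_power(n_clusters=40, n_per_cluster=10, icc=0.15,
                   effect_size=0.35, alpha=0.05, n_reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    arm = np.repeat([0, 1], n_clusters // 2)  # half treated, half control
    rejections = 0
    for _ in range(n_reps):
        u = rng.normal(0.0, np.sqrt(icc), n_clusters)  # school effects
        # The mean of n individual errors is N(0, (1 - icc) / n), so
        # cluster means can be simulated directly.
        e_bar = rng.normal(0.0, np.sqrt((1 - icc) / n_per_cluster),
                           n_clusters)
        cluster_means = effect_size * arm + u + e_bar
        _, p = stats.ttest_ind(cluster_means[arm == 1],
                               cluster_means[arm == 0])
        rejections += p < alpha
    return rejections / n_reps

print(simulate_power())  # estimated power under the assumed inputs
```

A cluster-level t-test is a valid analysis for a balanced design; swapping in a mixed-model fit (e.g., via statsmodels) would mirror the planned outcome analysis more closely, at greater computational cost.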
Method: Power computations for multilevel (e.g., teachers nested within schools) group-randomized trials were of interest, so data from an active randomized controlled trial were used to compare power calculations from the two approaches. Power analysis was conducted using OD and will also be conducted via a simulation study, in order to address variability in, and smallness of, within-cluster sample sizes in such cluster-randomized trials with individual-level outcomes.
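Because the Method centers on variability in within-cluster sample sizes, one way to fold that into the sketch above is to redraw each school's teacher count in every replication. The variant below builds on simulate_power() and its imports; the truncated-Poisson size distribution and its mean are assumptions chosen for illustration.

```python
# Variant of simulate_power() in which each school's size is drawn
# anew per replication, so power reflects realized size variability.
def simulate_power_varying(n_clusters=40, mean_size=10, icc=0.15,
                           effect_size=0.35, alpha=0.05,
                           n_reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    arm = np.repeat([0, 1], n_clusters // 2)
    rejections = 0
    for _ in range(n_reps):
        # Truncated Poisson: the floor of 2 keeps every school's
        # mean well defined (an illustrative assumption).
        sizes = np.maximum(rng.poisson(mean_size, n_clusters), 2)
        u = rng.normal(0.0, np.sqrt(icc), n_clusters)
        e_bar = rng.normal(0.0, np.sqrt((1 - icc) / sizes))
        cluster_means = effect_size * arm + u + e_bar
        _, p = stats.ttest_ind(cluster_means[arm == 1],
                               cluster_means[arm == 0])
        rejections += p < alpha
    return rejections / n_reps
```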
Results: Based on the previous study, we obtained an average ICC of 0.15 for teachers’ outcomes nested within schools. This value is larger than Murray and Short’s (1995) estimates of 0.01-0.05 for typical mental health measures; however, teacher surveys are expected to show greater variability across schools. Assuming an effect size of 0.35, a Type I error rate of 0.05, and a total of 40 schools across four cohorts, 10 or 11 teachers per school are needed to achieve power of 0.80; with 15 teachers per school, power reaches 0.88. The simulation results will also be presented in the paper.
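For context, formula-based power for a balanced two-arm cluster-randomized trial is typically driven by a noncentrality parameter of the following form (a normal-approximation sketch in the spirit of Raudenbush (1997); OD's exact computation uses noncentral distributions and can incorporate additional inputs, such as cluster-level covariates, not shown here). With standardized effect size δ, ICC ρ, J schools split evenly across arms, and n teachers per school:

```latex
\lambda = \frac{\delta}{\sqrt{\frac{4\,[\rho + (1-\rho)/n]}{J}}},
\qquad
\text{power} \approx \Phi\!\left(\lambda - z_{1-\alpha/2}\right)
```

The bracketed term is the variance of a cluster mean for a standardized outcome, which shows why gains from adding teachers per school flatten once (1-ρ)/n becomes small relative to ρ.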
Conclusions: Using OD, we adopted a conservative power estimate to accommodate the variation in the number of individual-level units that occurs in practice; however, such conservatism increases research costs. Moerbeek, van Breukelen, and Berger (2001) point out a limitation of OD's power computation: it does not account for variation in the number of individual-level units across clusters. A simulation study of power analysis in multilevel designs will expand on these results, following the approach illustrated by Scherbaum and Ferreter (2009).