Abstract: A Comparison of Methods for Testing Differences of Mediated Effects (Society for Prevention Research 21st Annual Meeting)

481 A Comparison of Methods for Testing Differences of Mediated Effects

Schedule:
Friday, May 31, 2013
Grand Ballroom A (Hyatt Regency San Francisco)
* noted as presenting author
Jason Williams, PhD, Research Psychologist, RTI International, Durham, NC
Introduction: Mediation analysis has become an integral part of examining psychological and behavioral theories. As theories have evolved and become more complex, so too have mediation analyses, with variants for causal inference and multilevel mediation. A natural extension of mediation hypotheses is to ask how indirect effects compare, either across different mediators with a common outcome or across groups such as gender or school grade. Despite a mediation contrast method put forward by MacKinnon (2000), comparisons of two or more mediated effects have traditionally been imprecise and lacking in statistical rigor, with conceptual murkiness about definitions of moderated mediation. Recently, several newer methods of comparing mediated effects have been suggested in the methodological literature (e.g., Williams & MacKinnon, 2008; Chan, 2007) and may facilitate comparing mediation across two mediators or testing moderation of a mediated effect by group membership. Unfortunately, the statistical performance of most of these tests has been relatively unclear. This study was undertaken to address this knowledge gap.

Methods: A simulation study compared five tests of mediation contrasts: Wald confidence intervals (CI), percentile bootstrap CI, bias-corrected bootstrap CI, likelihood-based CI, and a test based on dummy latent variables (DLV). Comparisons across groups and across different mediators were examined, and the simulations varied sample size (N = 50, 100, 250, 500) and path coefficient (0, .14, .39, .59, corresponding to previous simulations of mediated effects). Models with two paths (a single mediator) and three paths (two mediators) were examined. Tests were evaluated on Type I error, power, and confidence interval coverage.
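To illustrate one of the tests compared above, the following is a minimal sketch of a percentile bootstrap CI for the difference of two mediated effects (a1*b1 - a2*b2) in a two-mediator model. This is illustrative code, not the study's simulation code; the variable names, data-generating setup, and OLS estimation via least squares are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def boot_contrast_ci(x, m1, m2, y, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the mediation contrast
    a1*b1 - a2*b2 in a two-mediator model (illustrative only)."""
    n = len(x)

    def contrast(idx):
        xb, m1b, m2b, yb = x[idx], m1[idx], m2[idx], y[idx]
        # a paths: slope of each mediator regressed on x
        a1 = np.polyfit(xb, m1b, 1)[0]
        a2 = np.polyfit(xb, m2b, 1)[0]
        # b paths: regress y on x, m1, m2 jointly (OLS)
        X = np.column_stack([np.ones(n), xb, m1b, m2b])
        coef, *_ = np.linalg.lstsq(X, yb, rcond=None)
        b1, b2 = coef[2], coef[3]
        return a1 * b1 - a2 * b2

    # resample cases with replacement and take percentile bounds
    boots = np.array([contrast(rng.integers(0, n, n))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

A contrast whose CI excludes zero indicates that the two mediated effects differ; the percentile method uses the empirical bootstrap distribution directly, without the median-bias adjustment that distinguishes the bias-corrected variant.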

Results: Preliminary results suggest that the percentile bootstrap and likelihood-based CIs may be optimal. The bias-corrected bootstrap and the DLV method in particular showed inflated Type I error for contrasts of effects with a true difference of zero when one effect was of medium or large size; however, power to detect nonzero but small differences was greatest with these two methods. As the difference between mediated effects increases, all five tests perform with comparable power. All findings for two-path effects were magnified for three-path effects (two-mediator models).

Conclusions: The percentile bootstrap and likelihood-based CIs offer the best balance of power and Type I error. The bias-corrected bootstrap and DLV method offer greater power but also inflated Type I error in some situations. The Wald CI test should be avoided, as superior methods are available.