186 Abstract of Distinction: Reviewing Past Research for Brighter Policy Futures: Evaluating the Quality of Meta-Analyses in a Review of Reviews (Society for Prevention Research 25th Annual Meeting)

Schedule:
Wednesday, May 31, 2017
Bunker Hill (Hyatt Regency Washington, Washington DC)
Robert Andrew Marx, MS, Doctoral Student, Vanderbilt University, Nashville, TN
Emily E. Tanner-Smith, PhD, Associate Research Professor, Peabody Research Institute, Nashville, TN
Joseph A. Durlak, PhD, Professor Emeritus, Loyola University Chicago, Chicago, IL
Mark W. Lipsey, PhD, Professor, Vanderbilt University, Nashville, TN
Prevention scientists have long urged policymakers to draw on rigorous evidence when making decisions, often emphasizing meta-analyses as the gold standard of research evidence. This presentation will discuss the most common methodological weaknesses in meta-analyses of universal prevention programs. We hope this encourages review authors to adhere to common guidelines so that they offer the strongest evidence available, and provides practitioners a framework for selecting the research on which they base their decisions.

Our systematic review identified over 80 published meta-analyses of universal prevention programs for youth that met our inclusion criteria. We used best practices for systematic review data collection and coded both methodological and reporting quality.

Results indicated several areas for improvement. Three areas of concern relate to reporting on primary studies: on average, meta-analysts did a poor job of reporting total numbers of participants (60% reported no information), participants' average age (80.5% reported no information), and demographic information (77% reported no gender information; 87% reported no racial/ethnic information). Potentially more troubling were issues related to the conduct of the meta-analyses themselves. Almost three-quarters (73%) of the meta-analyses either did not mention publication bias or did not perform statistical analyses to assess it. One-third (33%) did not employ double coding to ensure accurate data extraction. Almost one-third (30.5%) made no attempt to include grey literature. More than one-quarter (27%) did not address the quality of the primary studies included. Approximately the same percentage (27%) did not report whether they employed a random- or fixed-effects model, and more than one-quarter (26%) did not report confidence intervals or standard errors for their effect sizes. Finally, almost one-quarter (24%) made no mention of heterogeneity of effect sizes.
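For readers less familiar with the terms above, the following is a minimal, self-contained sketch (in Python, with entirely made-up effect sizes and variances, not data from our review) of what a reported random-effects analysis typically includes: a DerSimonian-Laird pooled estimate with a 95% confidence interval, plus the Q and I² heterogeneity statistics that many of the reviewed meta-analyses omitted.

```python
import math

# Hypothetical standardized mean differences and their variances from
# five primary studies -- illustrative numbers only.
effects = [0.60, -0.10, 0.45, 0.05, 0.30]
variances = [0.04, 0.06, 0.03, 0.08, 0.05]

# Fixed-effect weights are inverse variances.
w = [1 / v for v in variances]
fixed_mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q statistic for heterogeneity of effect sizes.
q = sum(wi * (yi - fixed_mean) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
# I^2: share of total variation attributable to between-study heterogeneity.
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of between-study variance (tau^2).
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's variance.
w_re = [1 / (v + tau2) for v in variances]
re_mean = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

# 95% confidence interval for the pooled effect size.
ci = (re_mean - 1.96 * se_re, re_mean + 1.96 * se_re)
print(f"Random-effects mean = {re_mean:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.1f}%, tau^2 = {tau2:.4f}")
```

Reporting each of these quantities (model type, pooled estimate with confidence interval, and heterogeneity statistics) is what the guidelines discussed in this presentation ask of meta-analysts.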
Further, we explored associations between these quality markers and several publication factors, including publication year, number of authors, journal impact factor, and multi-institution collaboration, to examine general trends in meta-analysis quality. By sharing these common mistakes, we hope to improve the quality of evidence generated by meta-analysts and to increase the confidence with which practitioners and policymakers can draw on such reports.