Abstract: Economic Evaluations of Substance Abuse Prevention Programs for Youth Aged 8-18: A Systematic Review (Society for Prevention Research 24th Annual Meeting)

526 Economic Evaluations of Substance Abuse Prevention Programs for Youth Aged 8-18: A Systematic Review

Schedule:
Thursday, June 2, 2016
Pacific D/L (Hyatt Regency San Francisco)
* noted as presenting author
Gitanjali Shrestha, MA, Graduate Student, Washington State University, Pullman, WA
Laura Griner Hill, PhD, Professor and Chair, Washington State University, Pullman, WA
Introduction: Decision makers are increasingly relying on the results of economic evaluations to foster program accountability and maximize the efficiency of resource allocation. It is therefore important to conduct high-quality economic evaluations of prevention programs. We conducted a systematic review of peer-reviewed economic evaluations of substance abuse prevention programs aimed at youth aged 8-18. Our study contributes to the field of prevention science by identifying strengths and weaknesses of published economic evaluations and their corresponding intervention articles, thereby informing future research.

Method: We assessed the reporting quality of the economic evaluations along with the methodological quality of the intervention articles on which the economic evaluations were based. We included RCTs, quasi-experimental studies, and non-experimental studies that quantitatively measured outcomes of interest. In addition to searching eight databases, we used Google's "cited by" feature and searched the reference lists of economic evaluations and intervention articles to identify peer-reviewed economic evaluations of youth substance abuse prevention programs.

Results: We identified 11 studies that included economic evaluations; 8 met criteria for adequate rigor of the intervention study. These eight economic evaluations assessed nine prevention programs. All prevention programs were universal in nature, and almost all targeted some form of tobacco use. Most of the programs were implemented in the USA. The methodological quality of the intervention articles and the reporting quality of the economic evaluations varied widely. Most intervention articles either did not test for baseline equivalence or did not control for baseline differences in their analyses. Most articles either did not report attrition rates or had high attrition rates (more than 20%). Almost none of the articles reported a power analysis. The most common technique for conducting an economic evaluation was cost-effectiveness analysis, followed by cost-utility analysis and benefit-cost analysis. Most economic evaluations reported a discount rate; however, none of the evaluations of multiyear interventions discounted costs incurred after the first year of the intervention. Almost all economic evaluations reported results of sensitivity analyses, with most addressing parameter uncertainty but not methodological uncertainty. Most economic evaluations did not address the generalizability of their findings.
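As an illustration of the discounting issue noted above, costs incurred in later years of a multiyear intervention are conventionally converted to present value before being compared with outcomes; the 3% discount rate and cost stream in this sketch are hypothetical and are not figures drawn from the reviewed studies:

$$ PV = \sum_{t=0}^{T} \frac{C_t}{(1+r)^t}, \qquad \text{e.g., } C_t = \$100 \text{ for } t = 0, 1, 2 \text{ and } r = 0.03: \quad PV = 100 + \frac{100}{1.03} + \frac{100}{1.03^2} \approx \$291.35. $$

Skipping this step for costs incurred after the first year, as the reviewed multiyear evaluations did, overstates the present value of later-year costs.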

Conclusion: Given the policy implications of economic evaluation results, researchers need to adhere to a common set of reporting standards to improve the clarity, transparency, and comparability of results. Future economic evaluations should place more emphasis on programs targeting alcohol, cannabis, and other substances.