Abstract: Developing a Multi-Criteria Decision Analysis Tool to Assess Decision Making Around Prevention Intervention Selection for Implementation (Society for Prevention Research 27th Annual Meeting)

593 Developing a Multi-Criteria Decision Analysis Tool to Assess Decision Making Around Prevention Intervention Selection for Implementation

Schedule:
Friday, May 31, 2019
Pacific N/O (Hyatt Regency San Francisco)
* noted as presenting author
Gracelyn Cruden, MA, Doctoral Student, University of North Carolina at Chapel Hill, Chapel Hill, NC
Kristen Hassmiller Lich, PhD, Assistant Professor, University of North Carolina at Chapel Hill, Chapel Hill, NC
Leah Frerichs, PhD, Assistant Professor, University of North Carolina at Chapel Hill, Chapel Hill, NC
Byron Powell, PhD, Assistant Professor, University of North Carolina at Chapel Hill, Chapel Hill, NC
Paul Lanier, PhD, Assistant Professor, University of North Carolina at Chapel Hill, Chapel Hill, NC
C. Hendricks Brown, PhD, Professor, Northwestern University, Chicago, IL
Introduction: Local decision makers are often responsible for selecting which interventions will be implemented. They do so while facing complex contexts, competing priorities, and limited resources. This study aimed 1) to understand how to support decision makers during the implementation planning phase, particularly as they select which evidence-based prevention interventions to implement while considering local context, and 2) to identify which factors are most important to decision makers when selecting interventions.

Methods: We first collaboratively developed a multi-criteria decision analysis (MCDA) tool with eight Group Model Building stakeholders. Developing the MCDA tool involved four components: 1) determining the components of an intervention to be presented so that decision makers could evaluate the intervention, 2) identifying a list of criteria by which decision makers could rank the interventions, 3) assigning weights to each criterion, and 4) assigning a response scale for each criterion. Next, we had decision makers (n=14), including county social service agency directors, local partnership for children members, and state legislators, complete the MCDA tool before and after participating in a brief systems science learning lab. During the learning lab, decision makers learned about systems science and how to use a system dynamics model that simulated the effect of the evidence-based interventions in a given community. Using a weighted sum approach to score the MCDA tool, we determined how decision makers ranked three evidence-based preventive interventions drawn from the California Evidence-Based Clearinghouse for Child Welfare. We then compared whether each decision maker's ranking of the interventions changed after completing the learning lab.
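The weighted sum scoring described above can be sketched as follows. The criterion names, weights, and ratings here are purely illustrative assumptions for exposition, not the study's actual instrument: each intervention's rating on each criterion is multiplied by that criterion's weight, the products are summed, and interventions are ranked by total score.

```python
# Illustrative weighted-sum MCDA scoring. Criteria, weights, and ratings
# below are hypothetical, not the study's actual instrument.

# Criterion weights (normalized to sum to 1).
weights = {"target_population_fit": 0.40, "in_state_use": 0.35, "outcome_breadth": 0.25}

# Each intervention's response-scale rating on each criterion (e.g., 1-5).
ratings = {
    "Intervention A": {"target_population_fit": 4, "in_state_use": 5, "outcome_breadth": 3},
    "Intervention B": {"target_population_fit": 5, "in_state_use": 2, "outcome_breadth": 4},
    "Intervention C": {"target_population_fit": 3, "in_state_use": 4, "outcome_breadth": 2},
}

def weighted_sum_score(rating, weights):
    """Score = sum over criteria of (criterion weight * intervention rating)."""
    return sum(weights[c] * rating[c] for c in weights)

scores = {name: weighted_sum_score(r, weights) for name, r in ratings.items()}
# Rank interventions from highest to lowest total score.
ranking = sorted(scores, key=scores.get, reverse=True)
```

Comparing each decision maker's `ranking` before and after the learning lab is then a simple list comparison; changed weights or ratings propagate directly into changed ranks.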
Results: We present the degree to which intervention ranks changed before and after decision makers participated in the learning lab, and highlight the criteria that were most important to decision makers when deciding whether to select an intervention for their community. These criteria include the parent population that was targeted, whether the intervention was currently implemented in the state, and the breadth of the outcomes targeted.

Conclusion: These results can help prevention interventionists determine what information to highlight when disseminating descriptions of their interventions, and identify which factors decision makers may deem a priority for adaptation. Further, we find that the weight decision makers place on particular criteria can be influenced by a decision support simulation model, but ultimately, intervention characteristics and perceived fit with community needs and norms are more important to decision makers than perceived effectiveness.