Abstract: Embracing ‘Failure’ in Prevention Science: How Can We Promote a More Open and Honest Response to Trial Results Showing That Interventions ‘Don’t Work’ or Cause Harm? (Society for Prevention Research 27th Annual Meeting)

Schedule:
Wednesday, May 29, 2019
Garden Room B (Hyatt Regency San Francisco)
Nick Axford, PhD, Associate Professor in Health Services Research, University of Plymouth / PenCLAHRC, Plymouth, United Kingdom
Vashti Berry, PhD, Senior Research Fellow, University of Exeter, Exeter, United Kingdom
Jenny J Lloyd, PhD, Senior Research Fellow, University of Exeter, Exeter, United Kingdom
Katrina Wyatt, PhD, Professor of Relational Health, University of Exeter, Exeter, United Kingdom
Tim Hobbs, PhD, Director, Dartington Service Design Lab, Buckfastleigh, United Kingdom
In fields such as aviation, learning from failures to achieve desired outcomes is an embedded process intended to optimise performance. In health and social care, by contrast, it is often unclear how learning from failure shapes the commissioning of services or research; indeed, there can be a tendency to cover up or explain away such events. We see evidence of this behaviour in prevention science when trial results show null or harmful effects. Examples include not publicising findings, conducting spurious sub-group analyses, and attributing the outcome post hoc to real or perceived weaknesses in trial design or execution. This is unhelpful for several reasons, not least that it contributes to research ‘waste’, undermines respect for science and potentially stifles risk-taking innovation, leading at best to incremental change.

This paper explores common policy and research responses to finding that an intervention is ineffective or harmful: dismissing the results, decommissioning the intervention, continuing with the ‘failed’ intervention (in the absence of a better option or because it meets other criteria), and adapting the intervention and testing those adaptations. Some of these responses are illustrated through brief case studies of null-effect trials conducted by the authors in areas such as obesity, social-emotional learning and early years support. Each case study describes the trial results, what happened next and, as far as can be established, why.

The paper suggests that each stakeholder’s response is shaped by, inter alia, the nature of the ‘failure’, how much they have invested in the intervention (financially, psychologically, politically and organisationally), the extent to which they accept the trial findings, the availability (or lack) of alternatives, and whether they buy into the evidence-based practice paradigm.

The paper concludes by advancing several strategies to promote a more open and honest approach towards trials of interventions that show null or harmful effects. These strategies are categorised as ‘pre-empting’, ‘preparing for’, ‘acknowledging’ and ‘responding to’ such findings. The main message of the symposium is that the real failure in prevention science is a failure to learn from and act on disappointing results.