Session: Testing the Reliability and Validity of Methods to Increase the Feasibility of Implementation Monitoring (Society for Prevention Research 25th Annual Meeting)

4-046 Testing the Reliability and Validity of Methods to Increase the Feasibility of Implementation Monitoring

Schedule:
Friday, June 2, 2017: 2:45 PM-4:15 PM
Lexington (Hyatt Regency Washington, Washington DC)
Theme: Dissemination and Implementation Science
Symposium Organizer:
Cady Berkel
Discussant:
Lisa Saldana
Implementation monitoring is essential for both research and practice. Implementation trials that test different methods of supporting the delivery of evidence-based programs in community settings are a growing area of research. Measures of fidelity to the curriculum and quality of delivery are recommended outcomes for these trials. Because the target of the trial is typically a system or agency, feasibility of measurement is an important consideration. Monitoring implementation is also necessary to maintain the effects of evidence-based programs when they are delivered in community settings. Communities lack the resources to use gold-standard methods; consequently, if implementation is monitored at all, they often turn to methods that have not been validated. This organized symposium presents three studies that test strategies for reducing the burden associated with implementation monitoring. The studies draw on the effectiveness trial of the New Beginnings Program, an evidence-based parenting program to prevent child substance use and maladjustment following parental divorce. Multi-informant, multi-method data on multiple dimensions of implementation were collected, permitting these innovative analyses.

Measures of fidelity to a program's curriculum can range from fewer than 10 items to hundreds for a single session. Pragmatic approaches with fewer items typically lack the sensitivity to detect variability. To maintain sensitivity while creating more feasible measures, the first study applies machine learning methods to behavioral observation ratings of fidelity to the curriculum, identifying the most parsimonious set of items that retains predictive validity.
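The abstract does not name the specific algorithm; as one hedged illustration of this kind of item reduction, the sketch below uses L1-regularized (lasso) regression, which shrinks the coefficients of uninformative items to exactly zero, leaving a smaller item set with predictive signal. All data and variable names (item_ratings, outcome) are invented for the example, not taken from the study.

```python
# Minimal sketch: selecting a parsimonious fidelity-item set with the lasso.
# Data are simulated; the study's actual method may differ.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_sessions, n_items = 200, 120

# Hypothetical per-session observer ratings (0-4) for each fidelity item.
item_ratings = rng.integers(0, 5, size=(n_sessions, n_items)).astype(float)
# Hypothetical session-level outcome (e.g., a participant engagement score)
# driven here by the first 10 items, so the lasso has signal to find.
outcome = item_ratings[:, :10].mean(axis=1) + rng.normal(0, 0.5, n_sessions)

# Cross-validated lasso: items whose coefficients survive the L1 penalty
# form the parsimonious set with the strongest predictive validity.
model = LassoCV(cv=5).fit(item_ratings, outcome)
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} of {n_items} items retained:", selected)
```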

Although the use of provider self-report has been controversial, it is both pragmatic and potentially useful for quality improvement. In the second study, concordance across raters (independent observer and provider self-report) and predictive validity for the quality of delivery are assessed at the item level to determine whether there are items on which providers can accurately self-report.
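As a hedged illustration of item-level concordance, the sketch below computes Cohen's kappa between observer and provider ratings for each item; items with acceptable kappa would be candidates for provider self-report. The ratings are simulated and the 0-3 scale is an assumption, not a detail from the study.

```python
# Minimal sketch: item-level rater concordance via Cohen's kappa.
# Data, scale, and agreement rate are all hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
n_sessions, n_items = 150, 12

# Hypothetical categorical ratings (0-3) from the independent observer.
observer = rng.integers(0, 4, size=(n_sessions, n_items))
# Providers agree with observers ~80% of the time, else rate at random.
provider = np.where(rng.random((n_sessions, n_items)) < 0.8,
                    observer, rng.integers(0, 4, size=(n_sessions, n_items)))

# Items with acceptable kappa are candidates for provider self-report.
for item in range(n_items):
    kappa = cohen_kappa_score(observer[:, item], provider[:, item])
    print(f"item {item:2d}: kappa = {kappa:.2f}")
```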

Automated coding methods based on machine learning may be the end goal in making implementation monitoring truly feasible at a population level, if research can demonstrate their predictive validity. The third study tests the predictive validity of automated methods for assessing quality of delivery from session transcripts.
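As a hedged, simplified illustration of transcript-based automated coding, the sketch below checks how well a bag-of-words classifier reproduces observer quality ratings under cross-validation; the transcripts, labels, and pipeline are invented stand-ins, not the study's actual method.

```python
# Minimal sketch: predicting observer-rated quality from session transcripts.
# All text and labels are fabricated toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

transcripts = [
    "great job let's practice that skill together",  # hypothetical high quality
    "open your workbook and read the next page",     # hypothetical low quality
] * 50
quality = [1, 0] * 50  # hypothetical binary observer ratings

pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))

# Cross-validated agreement with observer ratings is one way to gauge
# the predictive validity of the automated codes.
scores = cross_val_score(pipeline, transcripts, quality, cv=5)
print("mean CV accuracy:", scores.mean())
```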

The discussant will critique the findings of these studies and discuss their implications for measuring implementation in both trials and community settings.


541
Application of Machine Learning Methods to Identify Valid Measures of Fidelity in Evidence-Based Parenting Programs
Cady Berkel, PhD, Arizona State University; Carlos G. Gallo, PhD, Northwestern University; Anne Marie Mauricio, PhD, Arizona State University; Irwin N. Sandler, PhD, Arizona State University; C. Hendricks Brown, PhD, Northwestern University; Sharlene Wolchik, PhD, Arizona State University
542
Concordance Between Provider and Independent Observer Ratings of Quality of Delivery in the New Beginnings Program
Anne Marie Mauricio, PhD, Arizona State University; Cady Berkel, PhD, Arizona State University; Carlos G. Gallo, PhD, Northwestern University; Irwin N. Sandler, PhD, Arizona State University; Sharlene Wolchik, PhD, Arizona State University; Jenn-Yun Tein, PhD, Arizona State University; C. Hendricks Brown, PhD, Northwestern University
543
Validating Computer-Based Methods for Assessing Quality in Parent-Training Behavioral Interventions
Carlos G. Gallo, PhD, Northwestern University; Cady Berkel, PhD, Arizona State University; Anne Marie Mauricio, PhD, Arizona State University; Irwin N. Sandler, PhD, Arizona State University; C. Hendricks Brown, PhD, Northwestern University