Abstract: Measurement Principles for Rapid Improvement of the Quality of Implementation Delivery (Society for Prevention Research 25th Annual Meeting)

Schedule:
Tuesday, May 30, 2017
Columbia A/B (Hyatt Regency Washington, Washington, DC)
Carlos Gallo, PhD, Research Assistant Professor, Northwestern University, Chicago, IL
Cady Berkel, PhD, Assistant Research Professor, Arizona State University, Tempe, AZ
Anne Marie Mauricio, PhD, Assistant Research Professor, Arizona State University, Tempe, AZ
Hilda M. Pantin, PhD, Professor, University of Miami Miller School of Medicine, Miami, FL
Guillermo Prado, PhD, Director, Division of Prevention Science and Community Health, University of Miami Miller School of Medicine, Miami, FL
C. Hendricks Brown, PhD, Professor, Northwestern University, Chicago, IL
PRESENTATION TYPE: Organized Paper Symposia Abstract

CATEGORY/THEME: Dissemination and Implementation Science

Background: A major challenge faced by researchers and practitioners alike is the accurate and feasible measurement of the quality of implementation, which is critical for monitoring programs as they are delivered in the field and for providing feedback for quality improvement. Behavioral observations, the gold-standard method of implementation measurement, are time-consuming and expensive to collect, and they require expertise and bandwidth that are sometimes prohibitive for local agencies. In this presentation we introduce a conceptual monitoring and feedback system that can identify implementation challenges in real time, allowing local agencies to correct intervention delivery in a timely fashion. This measurement system places minimal burden on the implementing agency and is unobtrusive, relying on automated text mining and the processing of nonverbal and verbal cues linked to the quality of implementation.

Methods: We present our computation-based methods for measuring implementation quality through three examples. The first uses emotion detection in the Good Behavior Game, a program in which teachers should respond calmly to disruptive behavior in class. The second identifies open-ended questions in Familias Unidas, a strategy facilitators use to engage family members in program discussions. The third identifies group leader utterances in the New Beginnings Program for divorcing parents that convey belief in parents' ability to use program skills.
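To make the flavor of these methods concrete, the sketch below shows one way a supervised text classifier could flag open-ended facilitator questions. It is a minimal illustration in Python using scikit-learn; the utterances, labels, and feature choices are invented placeholders, not the project's actual data or pipeline.

```python
# Hypothetical sketch: classifying facilitator utterances as open-ended
# engagement questions vs. other speech. Training data, features, and
# labels are illustrative placeholders only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled utterances (1 = open-ended engagement question, 0 = other)
utterances = [
    "What do you think made that conversation go well?",
    "How would you handle that situation next time?",
    "Please turn to page five of the workbook.",
    "Did you finish the homework?",
]
labels = [1, 1, 0, 0]

# Word- and bigram-level TF-IDF features feeding a logistic regression
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
classifier.fit(utterances, labels)

# Score a new utterance drawn from a session transcript
print(classifier.predict(["What are some ways you could respond calmly?"]))
```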

Findings: Using signal processing, machine learning, text mining, and knowledge engineering, we developed a proof-of-concept computer-based fidelity rating system for each example. Emotion detection: our system distinguishes cheerful, sad, and neutral tones with accuracy above 95%. Engagement questions: computer-based ratings of engagement are reliably correlated with human-based ratings, with a kappa of .83. Belief in participants' ability: a supervised machine-learning classifier predicts human-based, utterance-level ratings against the intervention protocol with accuracy above 90%.
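As a brief illustration of the agreement statistic reported above, the sketch below computes Cohen's kappa between hypothetical computer-based and human-based ratings; the rating vectors are invented for illustration and are not study data.

```python
# Illustrative computation of Cohen's kappa (chance-corrected agreement)
# between computer-generated and human-coded ratings.
from sklearn.metrics import cohen_kappa_score

human_ratings    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # 1 = open-ended question
computer_ratings = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]  # placeholder values

kappa = cohen_kappa_score(human_ratings, computer_ratings)
print(f"Cohen's kappa: {kappa:.2f}")
```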

Discussion: These results provide evidence for the feasibility of computer-based coding systems. Such systems can be used in a systems-level approach to monitoring and maintaining the quality of implementation of programs delivered in community settings.