Abstract: Application of Machine Learning Methods to Identify Valid Measures of Fidelity in Evidence-Based Parenting Programs (Society for Prevention Research 25th Annual Meeting)

541 Application of Machine Learning Methods to Identify Valid Measures of Fidelity in Evidence-Based Parenting Programs

Schedule:
Friday, June 2, 2017
Lexington (Hyatt Regency Washington, Washington DC)
Cady Berkel, PhD, Assistant Research Professor, Arizona State University, Tempe, AZ
Carlos G. Gallo, PhD, Research Assistant Professor, Northwestern University, Chicago, IL
Anne Marie Mauricio, PhD, Assistant Research Professor, Arizona State University, Tempe, AZ
Irwin N. Sandler, PhD, Regents' Professor, Arizona State University, Tempe, AZ
C. Hendricks Brown, PhD, Professor, Northwestern University, Chicago, IL
Sharlene Wolchik, PhD, Professor, Arizona State University, Tempe, AZ
Introduction: Implementing a program with fidelity to the curriculum would appear to be important for achieving targeted outcomes. Studies have typically found a positive relation between fidelity and outcomes; however, this relation has not been consistently replicated. In studies that fail to find an effect of fidelity, measurement may be the issue. In many cases, fidelity items are written at a global level to increase measurement feasibility, but this can result in items that are not specific enough to uncover variability in delivery. A potential solution to the need to balance the number and specificity of fidelity items is to identify the types of items that best predict outcomes. Machine learning methods can be used to examine predictive validity at the item level.

Methods: Our data consist of independent observer ratings of fidelity for 470 sessions of the New Beginnings Program (NBP) for divorcing parents. Based on our mediation model, in which delivery influences participant responsiveness, which in turn determines program outcomes, we conducted predictive validity analyses using LASSO to select the fidelity items that predict attendance and competent skills practice at the following session. We report on feature selection, which is performed as part of the model construction process. We will also examine possible differences across ethnicity and gender to determine whether different aspects of delivery have differential relevance across groups.
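The item-selection step described above can be sketched in code. This is a minimal illustration, not the study's actual analysis: the data are simulated, the "truly predictive" item indices are invented, and the penalty is chosen by cross-validation via scikit-learn's LassoCV, assumed here as a reasonable stand-in for the LASSO procedure the abstract describes.

```python
# Illustrative sketch of LASSO-based selection of fidelity items.
# All data below are simulated; dimensions mirror the abstract
# (470 sessions, 92 observer-rated fidelity items).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_sessions, n_items = 470, 92

# Simulated observer ratings (0-4 Likert-style) for each fidelity item
X = rng.integers(0, 5, size=(n_sessions, n_items)).astype(float)

# Simulated outcome (e.g., attendance at the following session), driven
# by a handful of hypothetical "truly predictive" items plus noise
true_items = [3, 17, 42]
y = X[:, true_items].sum(axis=1) + rng.normal(scale=1.0, size=n_sessions)

# Cross-validated LASSO picks the penalty minimizing predictive error;
# items with nonzero coefficients are the selected features
model = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(model.coef_)

print(f"{len(selected)} of {n_items} items retained")
```

In practice, the retained item set would then be inspected substantively (as in the Results below, where selected items clustered around allaying parent concerns).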

Results: From an initial set of 92 fidelity items, LASSO identified 47 items that predicted attendance at the following session with minimal predictive error. The selected items were spread across activity types (i.e., didactic, skills practice, and home practice review). Many of the most predictive items related to allaying parent concerns about role playing program skills and doing the home practice with children. We will repeat the process with indicators of skills practice as outcomes.

Conclusions: Typical fidelity monitoring measures are either excessively lengthy or too global to detect variability in delivery. Machine learning strategies such as LASSO can narrow the focus to the fidelity items that are most important to measure. In this way, these methods may also help to elucidate the core components of programs that are most closely tied to program outcomes.