542: Concordance Between Provider and Independent Observer Ratings of Quality of Delivery in the New Beginnings Program
Society for Prevention Research 25th Annual Meeting

Schedule:
Friday, June 2, 2017
Lexington (Hyatt Regency Washington, Washington DC)
Anne Marie Mauricio, PhD, Assistant Research Professor, Arizona State University, Tempe, AZ
Cady Berkel, PhD, Assistant Research Professor, Arizona State University, Tempe, AZ
Carlos G. Gallo, PhD, Research Assistant Professor, Northwestern University, Chicago, IL
Irwin N. Sandler, PhD, Regents' Professor, Arizona State University, Tempe, AZ
Sharlene Wolchik, PhD, Professor, Arizona State University, Tempe, AZ
Jenn-Yun Tein, PhD, Research Professor, Arizona State University, Tempe, AZ
C. Hendricks Brown, PhD, Professor, Northwestern University, Chicago, IL
Monitoring implementation of evidence-based programs (EBPs) in community settings poses considerable challenges. Behavioral observation by independent raters is reliable and valid but resource-intensive and often infeasible for community agencies. Provider self-report is cost-effective and pragmatic and can motivate self-directed skill improvement, but the validity and reliability of self-report are debatable. One dimension of implementation that is particularly challenging to assess is quality of delivery (quality), defined as the skill with which providers deliver material and interact with participants.

This study compares the reliability and predictive validity of quality ratings by independent observers (IOs) and provider self-reports, using implementation data from the New Beginnings Program (NBP) effectiveness trial, to evaluate whether providers can validly and reliably self-report the quality of their delivery. The NBP is a 10-session parenting EBP for divorced parents, delivered in separate mother and father groups. We will also examine whether the reliability and validity of provider ratings differ by providers' training and by the gender and size of the intervention group. Fifty NBP intervention groups (24 father groups; 26 mother groups) were implemented in the effectiveness trial. Providers rated the quality of one predetermined activity immediately after each session. For sessions 2, 3, 5, and 8, providers rated the quality of the home practice (HP) review; this study uses provider and corresponding IO ratings of HP quality. In total, providers and IOs rated 25 items operationalizing quality. The number of sessions parents attended and parent ratings of group cohesion at posttest, assessed with the Moos Group Environment Scale, were used to test the predictive validity of provider and IO ratings.

Preliminary correlational analyses show that parent-reported group cohesion was significantly correlated with IO ratings for 19 of the 25 items. Unexpectedly, IO ratings correlated with attendance for only 3 items. Provider ratings correlated with parent-reported group cohesion or attendance for only 4 of the 25 items. Intraclass correlation (ICC) analyses to assess provider-IO reliability, and moderation analyses for provider and group variables, are currently in progress.
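The provider-IO reliability analysis referenced above rests on intraclass correlations. As an illustration only, and not the authors' analysis code, a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater) computed from a subjects-by-raters matrix of quality scores might look like this; the function name and data layout are assumptions for the example:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: an (n_subjects, k_raters) array, e.g. rows = rated sessions,
    columns = [provider rating, independent observer rating].
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Partition total sum of squares into subject, rater, and error components.
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)            # mean square, subjects
    msc = ss_cols / (k - 1)            # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1)) # mean square, error

    # Shrout & Fleiss ICC(2,1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfect provider-observer agreement the function returns 1.0; systematic disagreement between the two columns pulls the value toward (or below) zero.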

Preliminary analyses suggest that provider ratings of quality may not be valid. Research suggests that logistical barriers, rather than program satisfaction, are the primary reasons parents do not attend sessions; this is consistent with our finding that quality was more strongly associated with parents' subjective experience of the intervention process than with attendance. It remains to be determined whether provider or group characteristics influence providers' ability to produce valid and reliable ratings.