Abstract: Fidelity Monitoring of an Evidence-Based Program Via Audio or Video Recordings: What Is Lost with Audio Only? (Society for Prevention Research 25th Annual Meeting)


Schedule:
Wednesday, May 31, 2017
Columbia C (Hyatt Regency Washington, Washington, DC)
Ashwini Tiwari, PhD, Lawson Postdoctoral Fellow, McMaster University, Hamilton, ON, Canada
Daniel Whitaker, PhD, Professor, Georgia State University, Atlanta, GA
Introduction: Fidelity monitoring of evidence-based programs (EBPs) is critical to maintaining program effectiveness as programs move from research to practice. Two aspects of fidelity are often targeted in EBPs: delivery of the specific program components (content) and the level of communication and rapport building between providers and clients (process). There is growing recognition of the importance of direct observation of providers’ sessions as part of fidelity monitoring. Live observation is rigorous and allows for real-time feedback, but it can be impractical, costly, and intrusive, especially for home-based services. Two other common methods of assessing fidelity in community settings are video and audio recordings. Audio, a more practical approach than video, can be recorded and transmitted unobtrusively. However, it is not known what aspects of fidelity may be lost when rating from audio rather than video recordings. The purpose of this research was to examine how these two methods of fidelity monitoring (audio and video) compare when used for a home-based parenting model (SafeCare®).

Methods: Twenty-five SafeCare sessions between home visitors and parents were video-recorded. Trained coders were randomly assigned to score sessions using either both the video and audio portions of the recording or the audio only. Sessions were coded using fidelity checklists consisting of 11 process items and 17 content items. Each item was coded as having occurred or not. In addition, coders could rate an item as a “technological limitation” when the recording method prevented them from coding it. Analyses compared levels of agreement and disagreement between audio and video coders across process and content items.
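
To make the agreement analysis concrete, the following is a minimal Python sketch of how item-level percent agreement between an audio coder and a video coder might be computed. This is an illustration only, not the authors’ analysis code; the item names and codes below are invented, and "TL" stands in for the “technological limitation” rating.

    # Hypothetical sketch of the item-level agreement comparison described above.
    # Codes: 1 = item occurred, 0 = did not occur, "TL" = technological limitation.
    from statistics import mean

    # Toy data: codes per item across four sessions (video coder vs. audio coder).
    video_codes = {
        "content_01": [1, 1, 0, 1],
        "process_01": [1, 0, 1, 1],
        "process_02": [1, 1, 1, 0],
    }
    audio_codes = {
        "content_01": [1, 1, 0, 1],             # full agreement
        "process_01": [1, "TL", 1, 0],          # one TL, one plain disagreement
        "process_02": ["TL", "TL", "TL", "TL"], # item uncodable from audio
    }

    for item, v_codes in video_codes.items():
        pairs = list(zip(v_codes, audio_codes[item]))
        agreement = 100 * mean(v == a for v, a in pairs)
        tl_disagreements = sum(a == "TL" and v != a for v, a in pairs)
        print(f"{item}: agreement = {agreement:.1f}%, "
              f"TL disagreements = {tl_disagreements}/{len(pairs)}")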

Results: Analyses indicated overall agreement between coders of 71.3% (SD = 10.8). Agreement was higher for content items (M = 78.8%, SD = 13.1) than for process items (M = 59.2%, SD = 12.9). Disagreements due to technological limitations among audio coders were noted for 13 items; such limitations were reported much more frequently for process items (54.5%) than for content items (11.8%). For 3 items, all classified as process, video and audio coders disagreed 100% of the time because of these limitations.

Conclusions: Compared with video, monitoring fidelity via audio recordings entails some loss of process-related fidelity information. One possible solution is to monitor fidelity with audio recordings in conjunction with supplementary methods, such as participant surveys, that better capture process items. Research should also examine the extent to which content and process fidelity relate to changes in family behavior, to further inform optimal fidelity monitoring methods for program use.