In the manualized PIJP intervention, sessions were video-recorded at seven implementing sites, and each video was independently coded using a standardized fidelity assessment tool. Adherence to the manual was coded as: a) Delivered, b) Partially Delivered, c) Not Delivered, and d) Not able to be assessed. Inter-rater reliability was calculated using Cohen’s kappa statistic; trained coders evaluating a randomly selected video achieved high agreement (κ = 0.848). Explanations of partial delivery and of adaptations made were coded using open coding techniques.
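For illustration, the reliability calculation described above can be reproduced in a few lines. The sketch below uses Python’s scikit-learn; the two coders’ ratings are hypothetical, and only the four adherence categories are taken from the fidelity assessment tool.

```python
# Minimal sketch of the inter-rater reliability calculation.
# The per-item ratings below are hypothetical examples; only the four
# adherence categories come from the fidelity assessment tool.
from sklearn.metrics import cohen_kappa_score

CATEGORIES = [
    "Delivered",
    "Partially Delivered",
    "Not Delivered",
    "Not able to be assessed",
]

# Hypothetical codes from two independent coders of the same video.
coder_a = ["Delivered", "Delivered", "Partially Delivered", "Delivered",
           "Not Delivered", "Delivered", "Not able to be assessed"]
coder_b = ["Delivered", "Delivered", "Partially Delivered", "Delivered",
           "Partially Delivered", "Delivered", "Not able to be assessed"]

# Cohen's kappa corrects raw percent agreement for agreement expected
# by chance: kappa = (p_o - p_e) / (1 - p_e).
kappa = cohen_kappa_score(coder_a, coder_b, labels=CATEGORIES)
print(f"Cohen's kappa = {kappa:.3f}")
```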
In the principle-based PPJS intervention, a subset of curriculum sessions was observed live using a checklist of “core concepts.” Facilitators answered two assessment questions regarding participant engagement and understanding of the concepts presented, and participants completed a session evaluation measuring understanding of the core concepts (knowledge) and facilitator rapport. The investigators aimed to assess adherence as well as site-specific nuances associated with learning and key outcomes.
Preliminary findings show that the two fidelity assessment practices (manualized and principle-based) are complementary. The manualized approach conveyed the importance of fidelity to facilitators, and high inter-rater reliability was achieved with minimal coder training. Nearly half of the PIJP content was delivered per the manual, and facilitators demonstrated, on average, high levels of facilitation skill. The principle-based approach addressed the need to capture not only adherence but also whether the program or intervention achieved the desired results. It also provided insight into implementation that allows adaptation to “real-life” situations without loss of effectiveness. However, this approach called for a highly skilled evaluator and for the development of new indicators that can be used broadly in community settings.