Abstract: Prevention System Monitoring with Big Data (Society for Prevention Research 23rd Annual Meeting)

506 Prevention System Monitoring with Big Data

Schedule:
Friday, May 29, 2015
Everglades (Hyatt Regency Washington)
Carlos Gallo, PhD, Research Assistant Professor, Northwestern University, Chicago, IL
C. Hendricks Brown, PhD, Professor, Northwestern University, Chicago, IL
Juan Andres Villamar, MSEd, Executive Coordinator, Center for Prevention Implementation Methodology, Northwestern University, Chicago, IL
Different measurement methods have been developed to assess the fidelity of intervention implementation. These methods sample intervention data of varying quality and quantity. High-quality data obtained from extensively trained human coders are generally reliable and highly informative, typically yielding a global measurement of an intervention session (macro-coding). Such assessments are expensive, however, and are therefore performed on only a small fraction of audio- or video-recorded sessions; as a result, the vast majority of delivered sessions go unanalyzed. Fidelity measurement methods need to reduce the cost per measurement while increasing the sampling rate. With the advent of big data from mobile phones, wearable sensors, GPS, and audio and video streaming, we have the opportunity to use advanced computational procedures to produce automated fidelity assessments. Frequent, inexpensive sampling provides a wealth of information for assessing components of fidelity. In other words, automated systems built on big data can measure far more often, using micro-level measures that are individually less valid and reliable. We investigate conditions under which large amounts of low-resolution big data can be assembled into higher-quality, cost-effective measurement of fidelity.
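As a minimal sketch of the aggregation idea (not the authors' implementation; the frame counts, error level, and fidelity values are hypothetical placeholders), averaging many noisy micro-level scores yields a session-level estimate whose error shrinks roughly with the square root of the number of frames:

```python
import numpy as np

rng = np.random.default_rng(0)

def session_fidelity_estimate(true_fidelity, n_frames, frame_noise_sd=0.3):
    """Aggregate noisy micro-level (frame) scores into a session-level estimate.

    true_fidelity : latent session fidelity on a 0-1 scale (hypothetical).
    n_frames      : number of automated micro-coded frames sampled in the session.
    frame_noise_sd: per-frame measurement error; any single frame is unreliable.
    """
    frame_scores = true_fidelity + rng.normal(0.0, frame_noise_sd, size=n_frames)
    # Averaging reduces the standard error by roughly 1/sqrt(n_frames).
    return frame_scores.mean()

# One noisy micro-coded frame vs. many cheap frames for a low-fidelity session:
print(session_fidelity_estimate(0.40, n_frames=1))    # highly variable
print(session_fidelity_estimate(0.40, n_frames=500))  # close to 0.40
```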

We demonstrate through simulated and collected data that the relative efficiency in detecting low-fidelity sessions is a function of sample size, the proportion of low-fidelity sessions, and variation over time. Specifically, by increasing how frequently we measure fidelity with micro-coding of the session, we can recognize a higher percentage of low-fidelity sessions earlier than with infrequent macro-coding. Such procedures often require a calibration step so that variation across individual intervention agents can be accounted for. We present data from audio clips that are classified automatically based on micro-coding (20 microsecond window) assessments of emotion. We find that by increasing sampling and aggregating multiple assessments, we overcome error and better predict emotion in audio. This strategy also benefits from the wealth of information currently available from text, audio, video, and other data collected passively (digital footprints) or generated actively.
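A toy simulation in the same spirit (all parameters below are illustrative assumptions, not values from the study) shows why dense, noisy micro-coding of every session can detect more low-fidelity sessions than accurate macro-coding of a small random subset:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_rates(n_sessions=1000, prop_low=0.2, frames_per_session=200,
                    frame_noise_sd=0.35, threshold=0.55, macro_fraction=0.1):
    """Compare detection of low-fidelity sessions under two monitoring schemes.

    prop_low       : proportion of sessions delivered with low fidelity (assumed).
    macro_fraction : fraction of sessions a human coder can afford to macro-code.
    threshold      : cutoff below which an aggregated score flags low fidelity.
    """
    low = rng.random(n_sessions) < prop_low
    true_score = np.where(low, 0.40, 0.75)  # hypothetical latent fidelity levels

    # Sparse macro-coding: only a random subset is ever examined (assumed accurate).
    macro_sampled = rng.random(n_sessions) < macro_fraction
    macro_detected = macro_sampled & low

    # Dense micro-coding: every session gets many noisy automated frame scores,
    # whose mean has error frame_noise_sd / sqrt(frames_per_session).
    session_means = true_score + rng.normal(
        0.0, frame_noise_sd / np.sqrt(frames_per_session), size=n_sessions)
    micro_detected = (session_means < threshold) & low

    return macro_detected.sum() / low.sum(), micro_detected.sum() / low.sum()

print(detection_rates())  # fraction of low-fidelity sessions caught by each scheme
```

Under these assumed settings, the macro-coding scheme can catch at most the fraction of sessions it samples, while the aggregated micro-coding flags nearly all low-fidelity sessions despite the unreliability of any single frame.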