Abstract: Let's Get Technical: Enhancing Program Evaluation through the Use and Integration of Technologies (Society for Prevention Research 23rd Annual Meeting)

Schedule:
Friday, May 29, 2015
Concord (Hyatt Regency Washington)
Frank Materia, MHS, Research Assistant, The Pennsylvania State University, State College, PA
Elizabeth A. Miller, MS, Graduate Research Assistant/Graduate Student, The Pennsylvania State University, State College, PA
Megan Runion, BA, Graduate Research Assistant, The Pennsylvania State University, State College, PA
Ryan Chesnut, PhD, Research & Evaluation Scientist, The Pennsylvania State University, State College, PA
Jamie Irvin, BA, Research Assistant, The Pennsylvania State University, State College, PA
Cameron Richardson, PhD, Research & Evaluation Scientist, The Pennsylvania State University, State College, PA
Program evaluation has become increasingly important, with information on program performance often driving funding decisions (Government Performance and Results Act of 1993; Wholey, 2010). Technology use and integration can ease the burdens associated with evaluating evidence-informed programs by reducing the resources needed (e.g., time, money, staff) and increasing evaluation efficiency. By employing technologically innovative evaluation methods, prevention scientists across disciplines can more easily collect data on program processes (e.g., implementation fidelity) and outcomes (e.g., participant behavior change).

Computer, Internet, cell phone, and smartphone use have increased rapidly over the past two decades (Pew Research Center, 2014), and the growing accessibility and use of these technologies (tech tools) present new opportunities for program evaluators. Parallel advances have been made in the accessibility, quality, and user-friendliness of the tech tools available to applied researchers, yet many evaluators, especially those on the front lines of program implementation, may be unaware of these tools and their potential uses. Although we are not the first to note the value of technology in applied research, the existing literature tends to focus narrowly on technology use in specific disciplines or on specific technologies. Our aim in this paper is to describe tech tools that are broadly applicable to program evaluators in any field.

This paper highlights four categories of technologies: web-based surveys, websites, prompting and participant communication systems, and real-time assessment tools. We pay special attention to each technology's ease of use, relative cost, and fit with target populations. Given the many ways in which technology can be used in program evaluation, we discuss each technology's applications to key aspects of program evaluation, specifically participant registration, participant tracking and retention, process evaluation (e.g., fidelity, assignment completion), and outcome evaluation. After reviewing these technologies and their applied uses, we discuss how they can be integrated to enhance data collection, data flow, data management, and, ultimately, the larger goal of program evaluation. We highlight important limitations of and considerations for technology integration, including the technical skill and cost required to integrate various technologies, as well as ethical considerations and research protections. Lastly, we illustrate how technology integration can enhance program evaluation by presenting two case studies of technology use in program evaluations conducted by an applied research center at a large land-grant university.