Abstract: Assessing the Validity and Equity of Predictive Analytics (Society for Prevention Research 27th Annual Meeting)

329 Assessing the Validity and Equity of Predictive Analytics

Schedule:
Thursday, May 30, 2019
Pacific A (Hyatt Regency San Francisco)
Kristen R Johnson, PhD, Prevention Science Coordinator, University of Minnesota-Twin Cities, Saint Paul, MN
Numerous human service fields are exploring how predictive analytics can inform decisions and improve the effectiveness of practice. Predictive Analytics (PA) is an empirical approach that relies on available data, usually system administrative data, to develop screening tools or alert systems that help direct resources and improve service decisions. PA and other risk assessment methods inform decision making in most juvenile justice and adult corrections settings, and their use is growing among child protection, child support, and financial assistance agencies. System administrative data often reflect disparities in case decisions and resources, however, and screening tools developed with predictive analytics may perpetuate those disparities.

Social service system data, such as those from child welfare, juvenile justice, and adult corrections, often reflect ethnic, geographic, or other disparities. Specifically, individuals and families of color are frequently overrepresented; they are more likely to have a prior history of system involvement and, in some cases, more likely than white individuals and families to have subsequent system involvement. The base rate, the proportion of individuals who experience the outcome of interest, can differ significantly by ethnicity. This can produce a PA tool or resulting risk classification that is more accurate for one subgroup than for another, and/or a high-risk classification in which the proportion experiencing the outcome of interest differs by ethnicity. The problem is compounded when jurisdictions implement a PA tool without validating it for the local population, and/or implement it without adequate support or monitoring. A PA tool can be effective only if it accurately and equitably estimates risk for the population, is reliable, and is well implemented in practice.
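As a minimal illustration of this point (not drawn from the presentation), the sketch below computes, separately by subgroup, the base rate and the proportion of cases classified high risk that actually experience the outcome; the data, subgroup labels, and column names are hypothetical.

```python
# Illustrative sketch: comparing base rates and the accuracy of a "high risk"
# classification across subgroups. All data and column names are hypothetical.
import pandas as pd

# Hypothetical validation data: one row per case, with the subgroup label,
# whether the case was classified high risk, and whether the outcome occurred.
cases = pd.DataFrame({
    "subgroup":  ["A"] * 6 + ["B"] * 6,
    "high_risk": [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0],
    "outcome":   [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0],
})

for name, grp in cases.groupby("subgroup"):
    base_rate = grp["outcome"].mean()            # P(outcome) within the subgroup
    flagged = grp[grp["high_risk"] == 1]
    ppv = flagged["outcome"].mean() if len(flagged) else float("nan")
    print(f"Subgroup {name}: base rate = {base_rate:.2f}, "
          f"P(outcome | classified high risk) = {ppv:.2f}")
```

If the subgroups show similar base rates but different proportions of high-risk cases that experience the outcome (or vice versa), the classification is not performing equitably even when overall accuracy looks acceptable.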

To ensure that predictive analytics reduce rather than exacerbate disparities, tool developers must pay careful attention to analytical validation measures and methods, as well as to reliability and effectiveness once a tool is implemented. Unfortunately, developers infrequently demonstrate that tool accuracy is equitable across subpopulations. Most measures used to evaluate the accuracy of predictive analytical tools are relative in nature (i.e., they indicate whether one tool is more accurate than another) and are applied to an overall sample rather than to subgroups.
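For example, a single overall discrimination statistic such as the area under the ROC curve (AUC) can look acceptable while masking weaker performance for one subgroup. The sketch below contrasts an overall AUC with subgroup-specific AUCs; the scores, outcomes, and subgroup labels are hypothetical, and it assumes scikit-learn is available.

```python
# Illustrative sketch: an overall relative accuracy measure (AUC) can hide
# subgroup differences. All values below are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

scores   = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.85, 0.6, 0.55, 0.5, 0.35, 0.1])
outcome  = np.array([1,   1,   0,   1,   0,   0,   1,    0,   1,    0,   1,    0])
subgroup = np.array(["A"] * 6 + ["B"] * 6)

print("Overall AUC:", round(roc_auc_score(outcome, scores), 2))
for g in np.unique(subgroup):
    mask = subgroup == g
    print(f"AUC for subgroup {g}:", round(roc_auc_score(outcome[mask], scores[mask]), 2))
```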

This presentation will review the strengths and limitations of existing measures of validity and equity for PA, using examples from child welfare and juvenile justice. It will also review existing methods for examining the use of predictive analytics in practice, i.e., the fidelity of tool implementation. Session attendees will leave with a strong understanding of when, how, and for whom to evaluate the accuracy of predictive analytical tools, and of the importance of evaluating equity as well as validity.