Despite differences in the methodological approaches used to develop predictive analytics, common criteria are needed to ensure that results are reliable, valid, equitable, and useful. Methods for establishing inter-rater reliability will be discussed, along with the reasons it is emphasized over inter-item reliability. The importance of predictive validity and methods for testing it will be outlined, followed by a parallel outline of measures and methods for assessing equity in findings. Sensitivity and specificity are ideal measures of validity when the decisions informed by predictive analytics are dichotomous (yes-or-no service decisions), but they have limited application when service decisions are multi-faceted, with many levels of service provided. Nuances of equity will be addressed, including the importance of assessing equity across the cultural groups represented in the population, geographic regions, and any other relevant subgroups. Lastly, the paper and presentation will outline how to ensure the utility of predictive analytics for practitioners, and how to monitor the use and accuracy of predictive analytic tools in practice settings to guard against unintended consequences. Examples from prevention programs, child protective services, and juvenile justice agencies will be referenced to demonstrate the need for these common evaluation approaches. The final discussion will summarize the need and rationale for common criteria and measures, defined by prevention science theory, for evaluating predictive analytic tools used in practice.
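To make the reliability criterion concrete, the sketch below computes Cohen's kappa, one standard inter-rater reliability statistic; the abstract does not name a specific statistic, so kappa, the helper function, and the rating data here are illustrative assumptions only.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance.
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected if ratings were independent."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two reviewers scoring the same ten referrals
# as high (H) or low (L) risk.
a = list("HHLLHLHHLL")
b = list("HHLLHLLHLL")
print(round(cohen_kappa(a, b), 3))  # 0.8
```

Unlike inter-item statistics, kappa is computed over pairs of ratings of the same cases, which is why it (or a similar agreement statistic) fits tools whose inputs come from practitioner judgments.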
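Sensitivity and specificity, the validity measures named above for dichotomous service decisions, follow directly from the confusion matrix. A minimal sketch, with hypothetical outcome and prediction labels:

```python
def sensitivity_specificity(y_true, y_pred):
    """Validity metrics for a dichotomous (yes/no) service decision.
    Sensitivity = TP / (TP + FN): share of true cases the tool flags.
    Specificity = TN / (TN + FP): share of non-cases it correctly passes over.
    Labels are 1 (needs service) and 0 (does not)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical follow-up data: 1 = substantiated need, 0 = no need.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.75, 0.75
```

When service decisions have many levels rather than two, there is no single confusion matrix to summarize, which is the limitation the abstract flags.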
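The equity assessment described above can be operationalized by recomputing the same validity metrics within each relevant subgroup and comparing them. The sketch below uses hypothetical region labels; the actual subgroups (cultural groups, geographic regions, and others) would come from the population being served.

```python
from collections import defaultdict

def metrics_by_group(records):
    """Equity audit: recompute sensitivity and specificity within each
    subgroup. `records` is a list of (group, y_true, y_pred) tuples;
    large gaps between groups signal potentially inequitable errors."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, t, p in records:
        c = counts[group]
        if t == 1 and p == 1: c["tp"] += 1
        elif t == 1: c["fn"] += 1          # t == 1, p == 0
        elif p == 0: c["tn"] += 1          # t == 0, p == 0
        else: c["fp"] += 1                 # t == 0, p == 1
    out = {}
    for group, c in counts.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else float("nan")
        spec = c["tn"] / (c["tn"] + c["fp"]) if c["tn"] + c["fp"] else float("nan")
        out[group] = (sens, spec)
    return out

# Hypothetical records: (region, outcome, prediction).
records = [("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 0),
           ("rural", 1, 1), ("rural", 0, 1), ("rural", 1, 1)]
for group, (sens, spec) in metrics_by_group(records).items():
    print(group, f"sens={sens:.2f}", f"spec={spec:.2f}")
```

Running the same stratified audit periodically in practice settings is one way to monitor a deployed tool for the unintended consequences the abstract cautions against.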