Abstract: Testing Measurement Invariance in Longitudinal Data Using Ordinal Variables (Society for Prevention Research 22nd Annual Meeting)

181 Testing Measurement Invariance in Longitudinal Data Using Ordinal Variables

Schedule:
Wednesday, May 28, 2014
Columbia A/B (Hyatt Regency Washington)
Yu Liu, MA, Research Assistant, Arizona State University, Tempe, AZ
Roger E. Millsap, PhD, Professor, Arizona State University, Tempe, AZ
Rika Tanaka, MA, Doctoral Student in Clinical Psychology, Arizona State University, Tempe, AZ
Jenn-Yun Tein, PhD, Research Professor, Arizona State University, Tempe, AZ
Prevention/intervention programs aimed at altering the developmental trajectories of behavioral or mental health problems (e.g., suicide, depression, eating behavior) have become more common. Evaluations of these programs typically examine the longitudinal trajectories of the outcome and mediating processes (e.g., suicidal ideation, cultural values, impulsivity) using growth modeling. The accuracy of growth modeling results hinges on the assumption of longitudinal measurement invariance, i.e., that the relationship between the measures and the target latent construct is the same across time. However, the transition from childhood to adolescence, or from adolescence to adulthood, brings significant changes in thought and behavior patterns, and some measures of interest may be age sensitive. Hence, evaluation of longitudinal measurement invariance is imperative for drawing valid conclusions about growth or change over time.

Given the common use of ordinal items (e.g., self-report Likert scales) in measures of key mediators and outcomes, a concern arises about whether it is appropriate to treat them as continuous and use maximum likelihood (ML) estimation or robust ML estimation (MLR) when evaluating longitudinal measurement invariance. Some simulation studies suggest that ML or MLR is acceptable when there are five or more response categories. However, if the observed response distribution is markedly skewed and answers fall mainly on, say, three of the five categories, ML results are prone to bias. In such cases, confirmatory factor analysis (CFA) models for ordinal scales are more appropriate.

This study illustrates, through examples, how to test longitudinal measurement invariance using CFA with ordinal variables. Three models are compared: a baseline model that assumes a common factor structure over time, a loading invariance model that additionally assumes that factor loadings are the same across time, and a threshold invariance model that further assumes that, for each item, the threshold for moving from one response category to the next (e.g., from "I somewhat believe this" to "I very much believe this") is the same over time. We also present a way to gauge the practical significance of violations of invariance: the estimated probabilities of choosing a specific category on an item at a given measurement wave are compared across models (e.g., a model assuming loading invariance versus a model assuming threshold invariance). An R program has been developed to help researchers calculate these predicted probabilities from CFA outputs.
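The abstract does not detail the R program, but the probability comparison it describes can be sketched as follows. This is a minimal illustration, not the authors' program: it assumes a probit link with a delta-style parameterization (latent response y* = loading × factor + residual), and all parameter values below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF (probit link)

def category_probs(loading, thresholds, factor_mean=0.0,
                   factor_var=1.0, resid_var=1.0):
    """Marginal predicted probabilities for each response category of one
    ordinal item.  The latent response is y* = loading*eta + e, with
    eta ~ N(factor_mean, factor_var) and e ~ N(0, resid_var); category k
    is observed when y* falls between thresholds k-1 and k."""
    sd = sqrt(loading ** 2 * factor_var + resid_var)
    cuts = [float("-inf")] + list(thresholds) + [float("inf")]
    return [Phi((cuts[k + 1] - loading * factor_mean) / sd)
            - Phi((cuts[k] - loading * factor_mean) / sd)
            for k in range(len(cuts) - 1)]

# Hypothetical wave-2 estimates for one 5-category item under two models:
p_loading_inv = category_probs(loading=0.70,
                               thresholds=[-1.2, -0.4, 0.5, 1.3],
                               factor_mean=0.3)
p_thresh_inv = category_probs(loading=0.70,
                              thresholds=[-1.0, -0.3, 0.6, 1.4],
                              factor_mean=0.3)
# Per-category differences gauge the practical impact of
# constraining the thresholds to equality over time:
diffs = [a - b for a, b in zip(p_loading_inv, p_thresh_inv)]
```

Small per-category differences would suggest that a statistically detected violation of threshold invariance has little practical consequence for the item's response probabilities.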