Nonconvergence, Covariance Constraints, and Class Enumeration in Growth Mixture Models

Daniel McNeish1, Jeffrey R. Harring2, & Daniel J. Bauer3
1Arizona State University, Tempe, USA
2University of Maryland, College Park, USA
3University of North Carolina, Chapel Hill, USA

Contact Information
Daniel McNeish, PO Box 871104, Arizona State University, Tempe, AZ 85287. Email: [email protected]

Acknowledgements
The authors were supported by the Institute of Education Sciences program in statistical and research methodology, project number R305D190011. The content is solely the responsibility of the authors and does not represent the official views of the Institute of Education Sciences. The data in the empirical example were supplied by the original authors under a Creative Commons BY 4.0 International license that allows use provided that proper attribution is given to the original authors. We thank these authors for their transparency and commitment to making scientific research an open process.

Abstract
Growth mixture models (GMMs) are a popular method for identifying latent classes of growth trajectories. One shortcoming of GMMs is nonconvergence, which often leads researchers to apply covariance equality constraints to simplify estimation. This approach is criticized because it introduces a dubious homoskedasticity assumption across classes. Alternative methods have been proposed to reduce nonconvergence without imposing covariance equality constraints, and although studies have shown that these methods perform well when the correct number of classes is known, research has not examined whether they can accurately identify the number of classes. Given that selecting the number of classes tends to be the most difficult aspect of GMMs, more information about class enumeration performance is crucial for assessing the potential utility of these methods.
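As a brief formal sketch of the constraint discussed in the abstract (notation is ours and not taken from the original text): in a GMM with $K$ latent classes, each class $k$ has its own covariance matrix $\boldsymbol{\Sigma}_k$ for the growth factors, and the covariance equality constraint imposes homoskedasticity across classes,

$$
\text{Unconstrained: } \boldsymbol{\Sigma}_1, \boldsymbol{\Sigma}_2, \ldots, \boldsymbol{\Sigma}_K \text{ estimated freely,}
\qquad
\text{Constrained: } \boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2 = \cdots = \boldsymbol{\Sigma}_K = \boldsymbol{\Sigma}.
$$

The constrained specification estimates far fewer parameters, which eases convergence but forces every class to share the same within-class variability.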