What is it about?

We review the limitations of model fit indices and show how those limitations undermine the validity of inferences based on them. In doing so, we encourage psychopathology researchers to think more broadly about the process of model selection and to approach model fit assessment more cautiously.


Why is it important?

First, we discuss the common misconception that fit statistics can identify the "best model," arguing that the mechanical application of model fit indices contributes to faulty inferences. We illustrate the consequences of this practice with examples from the quantitative psychopathology literature. Second, we highlight the parsimony-adjacent concept of fitting propensity, which commonly used fit statistics do not account for. Finally, we present specific strategies to overcome interpretative bias (in particular, "pro-bifactor interpretative bias") and to increase the generalizability of study results, and we stress the importance of carefully balancing substantive and statistical criteria in model selection.

Perspectives

Evaluating and retaining models based solely on good model fit does not qualify as model selection. When comparing how well a set of latent variable models fits the data, a bifactor model will invariably accommodate the data best. However, "accommodation" is not the same as useful "description."

Dr. Ashley L. Greene
James J. Peters VA Medical Center

Read the Original

This page is a summary of: Model fit is a fallible indicator of model quality in quantitative psychopathology research: A reply to Bader and Moshagen, Journal of Psychopathology and Clinical Science, August 2022, American Psychological Association (APA), DOI: 10.1037/abn0000770.
