Structural Equation Modeling Done Right

2-day course | Offering TBD

The technique of structural equation modeling (SEM) is widely used in many disciplines, including psychology, education, communication, biology, and medicine. Unfortunately, many, and possibly most, published SEM studies have at least one flaw severe enough to compromise the scientific merit of the work. This is because certain poor practices in SEM are relatively common, some of them maintained by statistical myths about the conduct of SEM or the interpretation of analysis results. These problems are compounded by widespread reporting deficiencies apparent in the literature.

The point of this course is to reinforce best practices in SEM and thereby help participants avoid common pitfalls and shortcomings in the area. Four topics are emphasized: (1) How to report results in ways that are transparent, complete, and consistent with the updated reporting standards for SEM studies from the American Psychological Association. (2) How to avoid confirmation bias by directly addressing the phenomenon of equivalent models, which fit the data just as well as the researcher’s target model but imply contradictory hypotheses about causation. (3) How to properly and thoroughly evaluate model fit, a critical part of deciding whether to retain or reject a model. (4) How to preregister the analysis plan, a best practice when a more exploratory phase of the analysis is expected.

INSTRUCTOR: Rex Kline, PhD

In this course, you will learn how to follow the best practices in SEM just summarized. Course topics include

  • Review of the content in APA reporting standards for SEM studies
  • The proper roles of significance testing and approximate fit indexes in evaluating global model fit
  • Identification of myths about model fit statistics, especially about thresholds of approximate fit indexes that supposedly signal “good” model fit
  • The role of evidence about local model fit in the form of residuals, which is ignored in too many studies (see the brief sketch after this list)
  • Types of residuals, including covariance, correlation, standardized, and normalized residuals
  • How to generate equivalent structural or measurement models
  • How to organize and describe the analysis plan, including its preregistration, in clear and transparent ways
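
To make the distinction between global and local model fit concrete, here is a brief, hypothetical sketch in R with the lavaan package (one of the programs named below). The three-factor model and lavaan's bundled HolzingerSwineford1939 data set are assumptions chosen only for illustration; they are not course material, and any SEM program could produce comparable output.

  # Minimal illustration, assuming R with the lavaan package installed
  library(lavaan)

  # Example measurement model for lavaan's bundled HolzingerSwineford1939 data
  model <- '
    visual  =~ x1 + x2 + x3
    textual =~ x4 + x5 + x6
    speed   =~ x7 + x8 + x9
  '
  fit <- cfa(model, data = HolzingerSwineford1939)

  # Global fit: the model chi-square test plus selected approximate fit indexes
  fitMeasures(fit, c("chisq", "df", "pvalue", "rmsea", "cfi", "srmr"))

  # Local fit: inspect the residuals, here correlation and standardized residuals
  residuals(fit, type = "cor")
  residuals(fit, type = "standardized")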

Best practices covered in this course do not rely on any particular computer tool or software package for SEM. Instead, the concepts and skills are those that any researcher should know or have mastered regardless of whether they use Mplus, lavaan, LISREL, Amos, or any other computer program. Thus, the course is about ideas, not about computer skills.

This course about best practices should benefit a range of participants, from researchers-in-training, such as graduate students, up through more experienced SEM researchers who seek to upgrade their knowledge. The overall goal is to help participants distinguish their work, whether it is a thesis or dissertation with SEM analyses submitted to a research committee or a manuscript submitted to a journal. Participants should have some prior exposure to SEM, such as in a course or through its application in research projects. The best practices covered in the course do not require expert-level knowledge of SEM. By the end of the course, participants will have learned some key ways to improve their future applications of SEM.

Upon completing this course, you will

  • Understand the contents of reporting standards for SEM studies, including the need to describe both global fit and local fit, or the residuals, in written summaries
  • Know how to interpret residuals of different types, including covariance, correlation, standardized, and normalized residuals
  • Avoid common false interpretations of global fit statistics, including the model chi-square and approximate fit indexes
  • Understand that failure to directly acknowledge the existence of equivalent or near-equivalent models is a form of confirmation bias
  • Be able to generate for your readers at least a few equivalent models (illustrated in the sketch after this list) and appreciate that rational argument, not statistical analysis, is the only way to prefer one equivalent model over another
  • Understand the role of preregistration as a way to reduce hypothesizing after the results are known, or HARKing, which is the undisclosed presentation of exploratory analyses as though they were confirmatory
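
The point about equivalent models can also be made concrete with a small sketch, again assuming R with lavaan and its bundled HolzingerSwineford1939 data; the variables and models are hypothetical choices for illustration only. The two path models below make opposite causal claims yet impose the same constraint on the data, so their global fit statistics are identical; only substantive argument can favor one over the other.

  library(lavaan)

  # Model A: the chain x1 -> x2 -> x3; Model B: the reversed chain x3 -> x2 -> x1
  fit_a <- sem('x2 ~ x1
                x3 ~ x2', data = HolzingerSwineford1939)
  fit_b <- sem('x2 ~ x3
                x1 ~ x2', data = HolzingerSwineford1939)

  # Both models have df = 1 and exactly the same chi-square and RMSEA
  fitMeasures(fit_a, c("chisq", "df", "rmsea"))
  fitMeasures(fit_b, c("chisq", "df", "rmsea"))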

A certificate of completion from the Canadian Centre for Research Analysis and Methods is provided at the end of the course.