CCRAM 2022

CCRAM Sessions 2022

Presented by the Canadian Centre for Research Analysis and Methods. Sessions begin June 2022 in Calgary, Alberta.

Join the CCRAM Sessions - in person

Featuring Canada's leading research methodology experts, the CCRAM Sessions 2022 will be held in person at the University of Calgary's downtown campus. Choose from two-, three-, or five-day course blocks. Attendees can expect to enhance and broaden their skills in data analysis and research design. In addition, this is a great opportunity to expand your professional and personal network and find new collaborators. CCRAM is an educational resource for burgeoning and veteran researchers in academia, government, and industry who rely on behavioural science methods – in whatever field and wherever in the world you are located – and who seek to update and expand their training in research methods and data analysis.

Spots are limited; register today.

Reduced rates are available until June 1, 2022.


Canadian Centre for Research Analysis and Methods (CCRAM) is the preeminent Canadian destination for academics and researchers to learn from the country’s leading behavioural science methodologists.

Scale Development and Psychometrics

Jessica Flake | McGill University

Three-day course (June 20-22)

Fees:
Earlybird pricing: $1,495 CAD
After May 18: $1,720 CAD


Introduction to Multilevel Modeling

Jason Rights | University of British Columbia

Three-day course (June 20-22)

Fees:
Earlybird pricing: $1,495 CAD
After May 18: $1,720 CAD


Doing Open and Replicable Science

Felix Cheung | University of Toronto

Two-day course (June 23-24)

Fees:
Earlybird pricing: $995 CAD
After May 18: $1,145 CAD


Structural Equation Modeling Done Right

Rex Kline | Concordia University

Two-day course (June 23-24)

Fees:
Earlybird pricing: $995 CAD
After May 18: $1,145 CAD


Meta-Analysis

Piers Steel | University of Calgary

Five-day course (June 26-30) (Course ends at noon on the 30th)

Fees:
Earlybird pricing: $1,995 CAD
After May 18: $2,295 CAD


Mediation, Moderation and Conditional Process Analysis

Andrew Hayes | University of Calgary

Five-day course (June 26-30) (Course ends at noon on the 30th)

Fees:
Earlybird pricing: $1,995 CAD
After May 18: $2,295 CAD

Visit Calgary

Experience Calgary

The CCRAM Sessions will be hosted at the University of Calgary's downtown campus. Take advantage of this opportunity to visit Calgary, Alberta, only an hour's drive from the extraordinary Canadian Rocky Mountains. Must-see destinations include Banff National Park, Jasper National Park, and Canmore. In warmer months, you can canoe or kayak across the many beautiful lakes, hike and camp in the tall forests, and breathe in the fresh mountain air.

Explore Calgary's vibrant downtown. Calgary’s culinary scene delivers flavours from all over the world with hundreds of great restaurants to choose from. Rich in arts, culture, entertainment and leisure activities, there’s always something to do in Calgary.

Getting to Calgary

Calgary is easy to reach from destinations around the world. With one of the world's most modern and welcoming airports, travellers have convenient commercial airline access as well as other options for getting to the city.


CCRAM Session Details

Location

All courses will be delivered in person at the University of Calgary’s downtown campus located at 906 8th Avenue SW, Calgary, Alberta, Canada.

Time

Classes begin at 9:00 AM and end between 4:00 PM and 5:00 PM each day (except June 30, when classes end at noon). All classes break for a one-hour lunch at noon.

Meals

Lunch will be provided on course days.

Accommodations

If you require accommodations while attending the CCRAM summer courses, we recommend the following hotels near the University of Calgary downtown campus. The selection below offers a range of options to fit your needs; for each hotel we list the price, the distance from campus, and instructions for booking at the University of Calgary preferred rate.

Sandman Signature Calgary Downtown

Price per night: $99

Distance from University of Calgary Downtown Campus: 200m, 2-minute walk

Instructions on booking: "Booking Instructions: Guests can make their own reservations online at www.sandmanhotels.com and book directly with the following steps: 1. Select the Sandman Signature Calgary Downtown, the dates you require, number of rooms, and the number of guests in the rooms, then click on Book Now. In the event that the hotels general inventory is limited the hotel may show no availability after you click the Book Now button. There will be a space to Add a Code. 2. Use drop down menu in the promo code box and select Web Group Code, then enter 2206HASKAY in the box below and click add.

Alternatively, you can call the hotel's 24-hour Central Reservations office at 1-800-726-3626 / 1-800-SANDMAN. To receive the correct rates, callers must reference the Sandman Signature Calgary Downtown, quoting Block ID 35387 or the Haskayne School of Business Group.

Residence Inn by Marriott Calgary Downtown / Beltline District

Price per night: $199

Distance from University of Calgary Downtown Campus: 700m, 9-minute walk, 2-minute drive

Last day to book: Monday May 16, 2022

Instructions on booking: Call Marriott Reservations at +1 587-885-2288 on or before Monday, May 16, 2022 (the "Cutoff Date") to make your reservation. Identify yourself as part of the University of Calgary Executive Education group staying at the Residence Inn by Marriott Calgary Downtown / Beltline District, located at 610 10th Avenue SW, Calgary, Alberta T2R 1M3.

Price per night: $209

Distance from University of Calgary Downtown Campus: 1.3 km, 16-minute walk, 5-minute drive

Last day to book: Thursday May 19, 2022

Price per night: $279

Distance from University of Calgary Downtown Campus: 1.4km, 18-minute walk, 5-minute drive


Cancellation Policy

If you need to cancel your registration or withdraw from your registered program, emailed notice must be submitted to a representative of Haskayne School of Business Executive Education.

Cancellation or withdrawal of your registration will incur the following fees:
• $100 for notice of cancellation/withdrawal from the program received 31 days or greater prior to the program start date
• The fee amount equivalent to 25% of the program cost, up to a maximum of $500, for notice of cancellation/withdrawal from the program received between 30 and 15 days prior to the program start date
• The fee amount equivalent to 100% of the program cost, for notice of cancellation/withdrawal from the program received 14 days or less prior to the program start date.

Should you be unable to attend a registered program due to acts of God, war, government regulations, disaster, strikes, civil disorder, curtailment of transportation facilities, pandemic, or other emergencies making it illegal or impossible to travel, emailed notice must be submitted to a representative of Haskayne School of Business Executive Education. You will be required to pay the $100 program deposit. All other cancellation fees will be waived.

Questions?
If you have questions, please contact us at (403) 220-6600 or by email at ccram@ucalgary.ca.

Scale Development and Psychometrics

Researchers in the academic and private sectors often need to measure some aspect of people's psychology, be it their attitudes, satisfaction, motivation, or intentions. We assume that the numbers these scales, questionnaires, tests, and surveys produce are meaningful: that someone with a higher satisfaction score is in fact more satisfied than someone with a lower score. Because scale scores are used to make decisions such as how to measure critical outcomes in a research study, whether to develop a product, or whether to admit a student or promote an employee, researchers need to thoroughly evaluate their validity. This short course will cover how to develop, evaluate, and refine scales using modern psychometric methods.

In this course, you will learn how to apply modern validity theory and psychometric methods to appropriately develop and use scales measuring psychological attributes.

  • Overview of construct validity theory and types of validity evidence
  • Item writing
  • Item content review and think-aloud protocol
  • Executing and interpreting item analysis
  • Overview of types of factor analysis
  • Executing and interpreting exploratory factor analysis
  • Executing and interpreting reliability analysis
  • Interpreting and evaluating validity evidence for scale selection and use

The course will focus on scale development and refinement with psychometric methods that can be implemented in many statistical software packages. Because this is a hands-on course, learners are encouraged to bring a laptop to class with a copy of R or SPSS installed. However, instruction will demonstrate the statistical techniques in multiple software programs, and students are not required to be experts in any specific software. Provided materials will include examples from several packages, including SPSS, SAS, and R.
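To give a concrete flavour of the analyses covered, here is a minimal sketch in R. It assumes the psych package (one common choice; the course itself is software-agnostic) and the bfi example data bundled with it.

  # Item analysis, exploratory factor analysis, and reliability in R with the
  # psych package (an illustrative assumption; equivalent analyses exist in
  # SPSS and SAS).
  library(psych)

  data(bfi)                 # example personality items bundled with psych
                            # (in newer versions, the psychTools package)
  items <- bfi[, 1:10]      # two 5-item scales: agreeableness (A1-A5) and
                            # conscientiousness (C1-C5)

  describe(items)           # item analysis: descriptive statistics per item

  # Exploratory factor analysis: two factors, oblique rotation
  efa <- fa(items, nfactors = 2, rotate = "oblimin", fm = "ml")
  print(efa$loadings, cutoff = 0.30)

  # Reliability (coefficient alpha) for the five agreeableness items
  alpha(bfi[, 1:5], check.keys = TRUE)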

This course will be helpful for researchers in any field—including psychology, sociology, education, business, human development, social work, public health, communication, and others that rely on social science methodology—who want to develop and use scales to measure psychological attributes. Learners should have background knowledge of introductory statistics topics such as univariate statistical tests, descriptive statistics, and correlation. Ideally, learners should be comfortable with multiple regression techniques. Though proficiency in any specific software isn't required, participants will ideally have some familiarity with running analyses using some type of statistical software (e.g., R, SPSS, SAS, Stata).

Upon completing this course, you will

  • Be able to define construct validity and describe different forms of validity evidence
  • Evaluate scale items for poor, confusing, or problematic wording
  • Use descriptive statistics to quantitatively evaluate item properties
  • Use qualitative approaches to review item content
  • Compare different approaches to factor analysis
  • Compare different approaches to quantifying reliability
  • Execute and interpret an exploratory factor analysis
  • Execute and interpret a reliability analysis
  • Evaluate multiple sources of validity evidence to select a scale
  • Evaluate multiple sources of validity evidence to develop or refine a scale
Jessica Flake, Ph.D.

Instructor: Jessica Flake, Ph.D.

McGill University

Courses taught for CCRAM:

Scale Development and Psychometrics

Dr. Flake is an Assistant Professor of Quantitative Psychology and Modelling at McGill University. She received an MA in quantitative psychology from James Madison University and a PhD in Measurement, Evaluation, and Assessment from the University of Connecticut. Her research develops and applies latent variable models for use in psychological research, with an emphasis on improving measure development and use. Her work is highly cited and published in top methodological and substantive outlets such as Nature Human Behaviour, Psychological Methods, Advances in Methods and Practices in Psychological Science, Structural Equation Modeling, Psychological Science, and the Journal of Personality and Social Psychology. She was named an Association for Psychological Science Rising Star in 2021 and received a Society for the Improvement of Psychological Science Commendation in 2020 for her research into questionable measurement practices.

Her work focuses on technical and applied aspects of psychological measurement, including scale development, psychometric modelling, and scale use and replicability. She is a top-rated professor in the Department of Psychology at McGill University, regularly teaching measurement and statistics courses as well as workshops at international conferences. Further, she routinely works in applied psychometrics as a technical advisory panel member for the Enrollment Management Association, a non-profit that develops educational assessments, and serves as the Assistant Director for Methods at the Psychological Science Accelerator, a laboratory network that conducts large-scale studies.

Luong, R. & Flake, J.K. (in press). Measurement invariance testing using confirmatory factor analysis and alignment optimization: A tutorial for transparent analysis planning and reporting. Psychological Methods.

Flake, J. K., Shaw, M., & Luong, R. (in press). Addressing a crisis of generalizability with large-scale construct validation. Behavioral and Brain Sciences.

Flake, J.K. (2021). Strengthening the foundation of educational psychology by integrating construct validation into open science reform. Educational Psychologist, 56, 132-141.

Beymer, P.N., Ferland, M., & Flake, J.K. (2021). Validity evidence for a short scale of college students’ perceptions of cost. Current Psychology, 1-20.

Hwang, H., Cho, G., Jung, K., Falk, C., Flake, J.K., & Jin, M. (2021). An approach to structural equation modeling with both factors and components: Integrated generalized structured component analysis. Psychological Methods, 26, 273–294.

Flake, J.K., & Fried, E.I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3, 456-465.

Shaw, M., Cloos, L., Luong, R., Elbaz, S. & Flake, J.K. (2020). Measurement practices in large-scale replications: Insights from Many Labs 2. Canadian Psychology, 61, 289-298.

Hehman, E., Calanchini, J., Flake, J. K., & Leitner, J. B. (2019). Establishing construct validity evidence for regional measures of explicit and implicit racial bias. Journal of Experimental Psychology: General, 148(6), 1022-1040.

Flake, J.K., & McCoach, D.B. (2018). An investigation of the alignment method with polytomous indicators under conditions of partial measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 25 (1), 56-70.

Flake, J.K., Pek, J., & Hehman, E. (2017). Construct validation in social and personality research: Current practice and recommendations. Social Psychological and Personality Science, 8, 370-378.

Flora, D. & Flake, J.K. (2017). The purpose and practice of exploratory and confirmatory factor analysis in psychological research: Decisions for scale development and validation. Canadian Journal of Behavioural Science, 49, 78-88.

Goldstein, J., & Flake, J.K. (2016). Towards a framework for the validation of early childhood assessment systems. Educational Assessment, Evaluation and Accountability, 23, 273-293.

Flake, J. K., Barron, K. E., Hulleman, C., McCoach, B. D., & Welsh, M. E. (2015). Measuring cost: The forgotten component of expectancy-value theory. Contemporary Educational Psychology, 41, 232–244.


Introduction to Multilevel Modeling

Multilevel modeling (MLM; also known as hierarchical linear modeling or linear mixed effects modeling) is widely used to analyze nested data structures in a variety of fields, including psychology, education, biology, and organizational research. Common examples of such structures include datasets in which students are nested within classrooms, patients are nested within clinicians, and repeated measures are nested within individuals. MLM provides an intuitive framework by which researchers can accommodate the dependency of observations within the same cluster (e.g., similarity of students within the same class) and simultaneously examine predictors at each level (e.g., student-level characteristics as well as classroom-level characteristics).

This course provides an introduction to multilevel modeling, with a focus on its application within the social, education, health, and business sciences. Participants will learn fundamental statistical principles underlying multilevel modeling, a variety of techniques and methods that can be used in many different research contexts, and how to appropriately specify models and interpret results in practice.

In this course, you will learn about the underlying principles and the practical applications of multilevel modeling. The topics covered include:

  • Review of single-level regression
  • Overview of nested data structures and methods to accommodate them
  • Distinguishing between fixed and random effects
  • Fitting and interpreting random intercept and random slope models
  • Centering choices and implications for model results
  • Model specification, estimation, and evaluation
  • Conducting multivariate tests
  • Engaging in model selection
  • Conducting power analyses and determining appropriate sample size
  • Longitudinal models and alternative error structures
  • Three-level (and higher-level) models
  • Cross-classified models

The course will focus on multilevel modeling as a framework that can be applied using a variety of software packages, rather than focusing exclusively on a single one. Because this is a hands-on course, learners are encouraged to bring a laptop to class with a copy of R installed, along with the following packages: lme4, nlme, and lmerTest. Most of the provided materials and examples will involve R code, but time will also be devoted to discussing how the same analyses can be implemented in other software, such as SPSS, SAS, Stata, Mplus, and MLwiN.
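As a small illustration of the modeling framework, the sketch below fits a random-intercept and a random-slope model in R with lme4 and lmerTest (two of the packages named above); the data frame dat, with students nested within schools, is hypothetical.

  # Two-level models: students (level 1) nested within schools (level 2).
  # 'dat' is a hypothetical data frame with columns score, ses (student-level),
  # school_size (school-level), and school (the cluster ID).
  library(lme4)
  library(lmerTest)  # adds p-values for the fixed effects

  # Random-intercept model: each school gets its own intercept
  m1 <- lmer(score ~ ses + school_size + (1 | school), data = dat)

  # Random-slope model: the effect of ses is allowed to vary across schools
  m2 <- lmer(score ~ ses + school_size + (1 + ses | school), data = dat)

  summary(m2)    # fixed effects and variance components
  anova(m1, m2)  # likelihood-ratio comparison of the two specifications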

This course will be helpful for researchers in any field—including psychology, sociology, education, business, human development, social work, public health, communication, and others that rely on social science methodology—who want to learn how to apply multilevel models to their research with widely used software. Learners will ideally be comfortable with multiple linear regression analysis, though this topic will be briefly reviewed at the beginning of the workshop. Participants will also ideally have some familiarity with running analyses using some type of statistical software (e.g., R, SPSS, SAS, Stata), but proficiency with any particular software will not be assumed.

Upon completing this course, you will

  • be able to extend the basic concepts of multiple linear regression analysis in single-level data contexts to multilevel modeling in nested data contexts
  • understand the motivation behind multilevel modeling, when it is appropriate to use in practice, and how it relates to alternative approaches for accommodating nested data structures
  • know how to make informed choices when specifying and evaluating a model, or a series of models, in practice
  • be able to implement multilevel modeling in a wide variety of data contexts, including cross-sectional and longitudinal data, data with two-level vs. higher-level structures, and data with purely hierarchical vs. cross-classified nesting
  • understand the basic ideas behind more advanced techniques (e.g., multilevel structural equation modeling) that extend the standard multilevel modeling framework
Jason Rights, Ph.D.

Instructor: Jason Rights, Ph.D

University of British Columbia

Courses taught for CCRAM:

Introduction to Multilevel Modeling

Dr. Rights received a Ph.D. in Quantitative Methods from the Department of Psychology and Human Development at Vanderbilt University.  He is currently an Assistant Professor of Quantitative Methods in the Department of Psychology at the University of British Columbia. His primary research focus is on addressing methodological complexities and developing statistical methods for multilevel/hierarchical data contexts (e.g., patients nested within clinicians, students nested within schools, or repeated measures nested within individuals). Specifically, he has recently been involved in several lines of research: (1) developing R-squared measures and methods for multilevel models; (2) addressing unappreciated consequences of conflating level-specific effects in analysis of multilevel data; (3) delineating relationships between multilevel models and other commonly used models, such as mixture models; and (4) advancing model selection and comparison methods for latent variable models. To aid researchers in applying his work, he develops software, primarily in R, that is openly available for public use.

Rights, J.D., & Sterba, S.K. (in press). R-squared measures for multilevel models with three or more levels. Multivariate Behavioral Research.

Rights, J.D., & Sterba, S.K. (2021). Effect size measures for longitudinal growth analyses: Extending a framework of multilevel model R-squareds to accommodate heteroscedasticity, autocorrelation, nonlinearity, and alternative centering strategies. New Directions for Child and Adolescent Development (Special Issue: Developmental Methods), 175, 65-110.

Rights, J.D., & Sterba, S.K. (2020). New recommendations on the use of R-squared differences in multilevel model comparisons. Multivariate Behavioral Research, 55, 568-599.

Rights, J.D., Preacher, K.J., & Cole, D.A. (2020). The danger of conflating level-specific effects of control variables when primary interest lies in level-2 effects. British Journal of Mathematical and Statistical Psychology, 73, 194-211.

Cole, D.A., Lu, R., Rights, J.D., Mick, C.R., Lubarsky, S.R., Gabruk, M.E., Lovette, A.J., Zhang, Y., Ford, M.A., Nick, E.A. (2020). Emotional and cognitive reactivity: Validating a multilevel modeling approach to daily diary data. Psychological Assessment, 32, 431-441.

Rights, J.D., & Cole, D.A. (2018). Effect size measures for multilevel models in clinical child and adolescent research: New R-squared methods and recommendations. Journal of Clinical Child & Adolescent Psychology, 47, 863-873.

Rights, J.D., & Sterba, S.K. (2019). Quantifying explained variance in multilevel models: An integrative framework for defining R-squared measures. Psychological Methods, 24, 309-338.       

Rights, J.D., Sterba, S.K., Cho, S.-J., & Preacher, K.J. (2018). Addressing model uncertainty in item response theory person scores through model averaging. Behaviormetrika, 45, 495-503.

Rights, J.D., & Sterba, S.K. (2018). A framework of R-squared measures for single-level and multilevel regression mixture models. Psychological Methods, 23, 434-457.

Sterba, S.K., & Rights, J.D. (2017). Effects of parceling on model selection: Parcel-allocation variability in model ranking. Psychological Methods, 22, 47-68.

Rights, J.D., & Sterba, S.K. (2016). The relationship between multilevel models and nonparametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity. British Journal of Mathematical and Statistical Psychology, 69, 316-343.

Sterba, S.K., & Rights, J.D. (2016). Accounting for parcel-allocation variability in practice: Combining sources of uncertainty and choosing the number of allocations. Multivariate Behavioral Research, 51, 296-313.


Doing Open and Replicable Science

Transparency and replicability are cornerstones of science. In 2015, a landmark study estimated that only 39% of a sample of published psychology findings could be replicated by independent teams of researchers. Since then, there have been major advances in doing more open and replicable science. Transparency in study planning, analysis code, and research materials facilitates independent verification of a study. Improving the replicability of research means that we can have stronger confidence in decisions based on empirical findings. This short course will cover how to do open and replicable science.

In this course, you will learn how to apply open and replicable scientific practices to develop a new quantitative study.

  • Overview of the replication crisis
  • Overview of the importance of open science
  • Overview of the properties of a replicable study
  • Evaluation of openness and replicability in past studies
  • Preregistration for experimental studies
  • Preregistration for observational studies
  • Open data
  • Open materials
  • Scientific reporting of an open and replicable study

The course will focus on using the Open Science Framework as a free platform to implement open science practices. Learners are encouraged to bring a laptop to class to complete hands-on exercises.
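For a sense of how such sharing can be scripted, here is a minimal sketch in R using the osfr package, an illustrative assumption; the hands-on exercises themselves use the OSF web interface.

  # Creating an OSF project and uploading open data and materials from R.
  # The osfr package, the token, and the file paths below are illustrative
  # assumptions, not part of the course requirements.
  library(osfr)

  osf_auth("YOUR_PERSONAL_ACCESS_TOKEN")  # hypothetical token from osf.io settings

  project <- osf_create_project(title = "My Open and Replicable Study")

  # Share de-identified data and study materials openly (hypothetical files)
  osf_upload(project, path = c("data/clean_data.csv", "materials/survey.pdf"))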

This course will be helpful for researchers in any field—including psychology, sociology, education, business, human development, social work, public health, communication, and others that rely on social science methodology—who want to develop a transparent and replicable research program. Learners should have background knowledge of introductory statistics topics such as univariate statistical tests, descriptive statistics, and null hypothesis significance testing. Though proficiency in a specific software package isn't required, participants will ideally have some familiarity with running analyses using some type of statistical software (e.g., R, SPSS, SAS, Stata). Learners will especially benefit from the course if they are planning a new study, and participants are welcome to complete some of the hands-on exercises in the context of their own research topic.

Upon completing this course, you will

  • Be able to understand and describe the replication crisis and its causes and consequences
  • Define open science
  • Understand what makes a study open and replicable
  • Evaluate past studies on their openness and replicability
  • Use pre-registration to specify your data collection and analytical plan
  • Complete a time-stamped and verifiable pre-registration
  • Understand the ethical considerations of open data and materials
  • Learn and complete steps to make study data and materials open
  • Learn how to prepare a scientific report based on open science principles
Felix Cheung, Ph.D.

Instructor: Felix Cheung, Ph.D.

University of Toronto

Courses taught for CCRAM:

Doing Open and Replicable Science

Dr. Cheung received his Ph.D. in Social and Personality Psychology at Michigan State University. He is currently an Assistant Professor at the University of Toronto in the Department of Psychology. Dr. Cheung has two main lines of research. The first line of research examines the determinants and consequences of subjective well-being across diverse populations, with a focus on addressing pressing global issues (e.g., sociopolitical unrest, income inequality, and terrorism). His second line of research focuses on meta-science (the scientific study of science) and examines how the reliability of scientific findings can be potentially improved by 'Big Science' (i.e., studies done by large collaborative teams), open science practices (e.g., pre-registration and data sharing), and research incentives. Together, these two lines of research seek to promote population well-being based on sound empirical research.

Landy, J. F., Jia, M., Ding, I. L., Viganola, D., Tierney, W., Dreber, A…, Cheung, F., ... Uhlmann, E. L. (2020). Crowdsourcing hypothesis tests: Making transparent how design choices shape research results. Psychological Bulletin, 146(5), 451-479.

Silberzahn, R., Uhlmann, E.L., Martin, D.P., Anselmi, P., Aust, F., Awtrey, E., Cheung, F., … Nosek, B. A. (2018). Many analysts, one dataset: Making transparent how variations in analytical choices affect results. Advances in Methods and Practices in Psychological Science, 1, 337-356.

Anderson, C. J., Bahník, Š., Barnett-Cowan, M., Bosco, F. A., Chandler, J., Chartier, C. R., … Cheung, F., …, & Zuni, K. (2016). Response to a comment on "Estimating the reproducibility of psychological science". Science, 351(6277), 1037.

Schweinsberg, M., Madan, N., Vianello, M., Sommer, S. A., Jordan, J., Tierney, W., Awtrey, E., Zhu, L., … Cheung, F., … , & Uhlmann, E. L. (2016). The pipeline project: Pre-publication independent replications of a single laboratory's research pipeline. Journal of Experimental Social Psychology, 66, 55-67.

Tierney, W., Schweinsberg, M., Jordan, J., Kennedy, D. M., Qureshi, I., Sommer, S.A., … Cheung, F., …, & Uhlmann, E. L. (2016). Data from a pre-publication independent replication initiative: The pipeline project. Scientific Data, 3, 160082.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251).

Johnson, D.J., Cheung, F., & Donnellan, M.B. (2014). Hunting for artifacts: The perils of dismissing inconsistent replication results. Social Psychology, 45, 318-320.


Structural Equation Modeling Done Right

The technique of structural equation modeling (SEM) is widely used in many disciplines including psychology, education, communication, biology, medicine, and others. Unfortunately, many—and possibly most—published SEM studies have at least one flaw so severe that it compromises the scientific merit of the work. This is because there are certain poor practices in SEM that are relatively common, some of which are maintained by statistical myths about the conduct of SEM or about the interpretation of analysis results. These problems are compounded by widespread deficiencies in reporting apparent in the literature.

The point of this course is to reinforce best practices in SEM and thereby help participants avoid common pitfalls and shortcomings in the area. Four topics are emphasized: (1) how to report results in ways that are transparent, complete, and respect the updated reporting standards for SEM studies from the American Psychological Association; (2) how to avoid confirmation bias by directly addressing the phenomenon of equivalent models, which fit the data just as well as the researcher's target model but embody contradictory hypotheses about causation; (3) how to properly and thoroughly evaluate model fit, a critical part of deciding whether to retain or reject a model; and (4) how to preregister the analysis plan, a best practice when a more exploratory phase of the analysis is expected.

In this course, you will learn how to follow the best practices in SEM just summarized. Course topics include:

  • Review of the APA reporting standards for SEM studies
  • The proper role of significance testing versus model fit indexing in evaluating global model fit
  • Myths about model fit statistics, especially thresholds of approximate fit indexes that supposedly signal "good" model fit
  • Evidence for local model fit, in the form of residuals, which too many studies ignore
  • Types of residuals, including covariance, correlation, standardized, and normalized residuals
  • How to generate equivalent structural or measurement models
  • How to plan, organize, and describe the analysis plan—including preregistration of that plan—in clear and transparent ways

The best practices covered in this course do not rely on any particular computer tool or software package for SEM. Instead, the concepts and skills are those that any researcher should have mastered regardless of whether they use Mplus, lavaan, LISREL, Amos, or any other computer program. Thus, the course is about ideas, not about computer skills.
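That said, a short sketch can show where the global and local fit evidence discussed above comes from in practice. The example below uses lavaan in R (one of the programs named above, chosen here only for illustration) and its bundled Holzinger-Swineford data.

  # Global and local fit evidence for a classic three-factor measurement model.
  library(lavaan)

  model <- '
    visual  =~ x1 + x2 + x3
    textual =~ x4 + x5 + x6
    speed   =~ x7 + x8 + x9
  '
  fit <- cfa(model, data = HolzingerSwineford1939)

  # Global fit: the model chi-square and approximate fit indexes
  fitMeasures(fit, c("chisq", "df", "pvalue", "rmsea", "srmr", "cfi"))

  # Local fit: correlation residuals, the evidence too many reports omit
  residuals(fit, type = "cor")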

This course on best practices should benefit a range of participants, from researchers-in-training, such as graduate students, up through current researchers with more SEM experience who seek to upgrade their knowledge. The overall goal is to help participants distinguish their work, whether it is submitted as a thesis or dissertation to a research committee or as a manuscript with SEM analyses submitted to a journal. Participants should have some prior exposure to SEM, such as in a course or through its application in research projects. The best practices covered in the course do not require expert-level knowledge of SEM. By the end of the course, participants will have learned some key ways to improve their future applications of SEM.

Upon completing this course, you will

  • Understand the contents of reporting standards for SEM studies including the need to describe both global fit and local fit, or the residuals, in written summaries
  • Know how to interpret residuals of different types, including covariance, correlation, standardized, or normalized residuals
  • Avoid common false interpretations of global fit statistics, including the model chi-square and approximate fit indexes
  • Understand that failure to directly acknowledge the existence of equivalent or near-equivalent models is a form of confirmation bias
  • Be able to generate for your readers at least a few equivalent models and appreciate that rational argument, not statistical analysis, is the only way to prefer one equivalent model over another
  • Understand the role of preregistration as a way to reduce hypothesizing after the results are known (HARKing), which is the undisclosed presentation of exploratory analyses as though they were confirmatory
Rex Kline, Ph.D.

Instructor: Rex Kline, Ph.D.

Concordia University

Courses taught for CCRAM:

Structural Equation Modeling Done Right

Dr. Kline received a PhD in Clinical Psychology with a doctoral minor in Statistics and Measurement from Wayne State University in Detroit, Michigan. He is currently a Professor in the Department of Psychology at Concordia University in Montréal, Québec, Canada. He has conducted research on the psychometric evaluation of cognitive abilities, behavioral and scholastic assessment of children, structural equation modeling, training of researchers, statistics reform in the behavioral sciences, and usability engineering in computer science. Dr. Kline is the author of Principles and Practice of Structural Equation Modeling, which through four editions (1998, 2005, 2011, 2016) has been one of the most widely cited introductory-level textbooks in the area; the fifth edition is forthcoming. Recently, Dr. Kline was a member of the Publications and Communications Board Task Force of the American Psychological Association that revised journal article reporting standards for quantitative studies and introduced updated reporting standards for SEM studies.

Books

Kline, R. B. (2020). Becoming a behavioral science researcher: A guide to producing research that matters (2nd ed.). New York: Guilford Press.

Kline, R. B. (2019). 구조방정식모형. (Principles and Practice of Structural Equation Modeling, 4th ed., Korean trans.). Seoul, Korea: Hakjisa Publisher.

Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed.). New York: Guilford Press.

Kline, R. B. (2013). Beyond significance testing: Statistics reform in the behavioral sciences (2nd ed.). Washington, DC: American Psychological Association.

Standards

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board Task Force report. American Psychologist, 73, 3–25. https://doi.org/10.1037/amp0000191

Articles

Kline, R. B. (in press). Post p-value education in graduate statistics: Preparing tomorrow’s psychology researchers for a post-crisis future. Canadian Psychology.

Zhang, M. F., Dawson, J., & Kline, R. B. (in press). Evaluating the use of covariance-based structural equation modelling with reflective measurement in organisational and management research: A review and recommendations for best practice. British Journal of Management.

Sauvé, G., Kline, R. B., Shah, J. L., Joober, R., Malla, A., Brodeur, M. B., & Lepage, M. (2019). Cognitive capacity similarly predicts insight into symptoms in first- and multiple-episode psychosis. Schizophrenia Research, 206, 236–243. https://doi.org/10.1016/j.schres.2018.11.013

Nicolakakis, N., Stock, S. R., Abrahamowicz, M., Kline, R., & Messing, K. (2017). Relations between work and upper extremity musculoskeletal problems (UEMSP) and the moderating role of psychosocial work factors on the relation between computer work and UEMSP. International Archives of Occupational and Environmental Health, 90, 751–764. https://doi.org/10.1007/s00420-017-1236-9

Goodboy, A. K., & Kline, R. B. (2017). Statistical and practical concerns with published communication research featuring structural equation modeling. Communication Research Reports, 34, 1–10. https://doi.org/10.1080/08824096.2016.1214121

Kline, R. B. (2015). The mediation myth. Basic and Applied Social Psychology, 37, 202–213. https://doi.org/10.1080/01973533.2015.1049349

Chapters

Kline, R. B. (in press). Structural equation modeling. In R. Tierney, F. Rizvi, K. Ercikan, & G. Smith (Eds.), International encyclopedia of education (4th ed.). Oxford, United Kingdom: Elsevier.

Kline, R. B. (in press). Structural equation modeling in neuropsychology research. In G. Brown, B. Crosson, K. Haaland, & T. King (Eds.), APA handbook of neuropsychology. Washington DC: American Psychological Association.

Kline, R. B. (in press). Psychometrics. In P. Atkinson, S. Delamont, M. Hardy, & M. Williams (Eds.), Encyclopaedia of Social Research Methods (2nd ed.). Thousand Oaks, CA: Sage.

Kline, R. B. (2017). Mediation analysis in leadership studies: New developments and perspectives. In B. Schyns, R. J. Hall, & P. Neves (Eds.), Handbook of methods in leadership research (pp. 173–194). Northampton, MA: Elgar.

Kline, R. B. (2015). Path models. In D. S. Dunn (Ed.), Oxford bibliographies in psychology. New York: Oxford University Press.

Kline, R. B. (2013). Reverse arrow dynamics: Feedback loops and formative measurement. In G. R. Hancock and R. O. Mueller (Eds.), Structural equation modeling: A second course (2nd ed., pp. 39–76). Greenwich, CT: Information Age Publishing.

Kline, R. B. (2013). Exploratory and confirmatory factor analysis. In Y. Petscher & C. Schatschneider (Eds.), Applied quantitative analysis in the social sciences (pp. 171-207). New York: Routledge.

Colloquia

Mediation analysis in cross-sectional designs. La Société Statistique de Montréal (SSM) et Collectif pour le développement et les applications en mesure et évaluation de la Faculté des sciences de l'éducation de l'UQÀM, March 16, 2018.

New developments in mediation analysis. SSM et Collectif pour le développement et les applications en mesure et évaluation de la Faculté des sciences de l'éducation de l'UQÀM, November 25, 2016.

Living statistics reform. SSM et Collectif pour le développement et les applications en mesure et évaluation de la Faculté des sciences de l'éducation de l'UQÀM, March 24, 2016.

Becoming a behavioral science researcher. Southwest Educational Research Association, Presidential invited address, February 10, 2016.

Hello, statistics reform. Nebraska Academy for Methodology, Analytics and Psychometrics, University of Nebraska–Lincoln, Nov 10, 2015; School of Psychology, University of Ottawa, Sept 25, 2014; Department of Psychology, Concordia University, Sept 26, 2013.

New developments in structural equation modeling. Methodology, Analytics & Psychometrics Academy, University of Nebraska–Lincoln. Nov 10, 2014.

Seminars

Advanced topics in structural equation modeling. Quebec Inter-University Centre for Social Statistics (QICSS), Montréal, May 13–15, 2019; April 25–27, 2016; April 27–29, 2015; May 12–14, 2014; May 8–19, 2013; May 14–16, 2012.

Introduction to structural equation modeling. QICSS, Montréal, May 6–10, 2019; May 14−18, 2018; April 17–21, 2017; April 18–22, 2016; April 20–24, 2015; April 28–May 2, 2014; April 22–26, 2013; February 20-24, 2012; May 2–6, 2011; February 21–25, 2011; May 17–24, 2010; February 22–26, 2010; May 25–29, 2009; June 9–13, 2008; December 1–5, 2008; May 22–25, 2007.

Structural equation modeling. Istanbul Quantitative Lectures, School of Business, Istanbul University, July 6–11, 2015; August 25–31, 2014; July 1–12, 2013.

Introduction to structural equation modeling. Portland State University, Summer Quantitative Methods Series, Portland, OR, June 16–17, 2014; June 15–16, 2012.

Structural equation modeling. Axe Santé des populations et pratiques optimales en santé, Centre de recherche du CHU de Québec, Université Laval, October 28-29, 2013.

Structural equation modeling. Ted Rogers School of Management, Ryerson University, May 13–14, 2013.


Meta-Analysis

One of the greatest scientific challenges and opportunities in the information age is making sense of, and making use of, a vast sea of scholarly findings. To address this need, we increasingly rely on meta-analysis, a form of systematic review with statistical roots in multilevel modeling (MLM). With the exponential growth in publication rates, these quantitative summaries are among the most cited and valued articles today. Unfortunately, they are as hard to do as they are valuable. The process requires combing through the literature, gathering the empirical results pertaining to a finding, transforming them to a common metric, analyzing the data, and presenting the results clearly and transparently. Traditionally, a standard meta-analysis takes a four-person team over 67 weeks to conduct.

This course provides a theoretical introduction to meta-analysis alongside practical advice, with emphasis on the latter. Participants will learn how to reduce coding time ten-fold by using the latest techniques and resources, including integrated online platforms that draw on optical character recognition, machine learning, and automatic error detection. We will replicate and critique previous tier-1 meta-analyses in real time, with the option to expand and co-author for later publication. The goal of the course is to enable participants to readily publish their own competitive project.

In this course, you will learn about the underlying principles and the practical applications of meta-analysis. The topics covered include:

  • Determining a competitive topic for meta-analytic review
  • Refining the topic to ensure it is manageable and relevant
  • Establishing and training a research team
  • Defining and conducting your literature search iteratively
  • Deduplication and title/abstract screening
  • Full-text acquisition and screening
  • Dealing with incomplete and foreign-language articles
  • Establishing potential moderators
  • Creating a taxonomy and connecting measures to constructs
  • Converting effect sizes to a common metric
  • The data entry process
  • Dealing with dependent effect sizes and time series data
  • Psychometric corrections for meta-analysis
  • Kappa versus error detection
  • Meta-analytic models and their weighting schemes
  • Outlier analysis
  • Publication bias analysis
  • Fundamental meta-analytic results
  • Cross-lagged time series meta-analyses
  • Meta-regression and moderator analysis
  • Meta-analytic structural equation modeling (MASEM)
  • Integrating other research designs
  • Open science reporting

The course will focus on practical meta-analysis, with a focus on data filtering, data entry, and analysis in HubMeta and R, though options in other software (e.g., SPSS) will be referenced. This is a hands-on course, so learners are encouraged to bring a laptop to class with a copy of R installed. Most of the provided statistical materials and examples will involve R code.
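As a small preview of the modeling step, the sketch below runs a random-effects meta-analysis of correlations in R. It assumes the metafor package (not named in the course description, but a widely used option) and a hypothetical coded dataset.

  # Random-effects meta-analysis of correlations with metafor (assumed package).
  library(metafor)

  # Hypothetical coded data: one correlation (ri) and sample size (ni) per study
  dat <- data.frame(study = 1:5,
                    ri = c(0.21, 0.35, 0.18, 0.29, 0.40),
                    ni = c(120, 85, 240, 60, 150))

  # Convert effect sizes to a common metric (Fisher's z) with sampling variances
  dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)

  # Random-effects model with inverse-variance weights
  res <- rma(yi, vi, data = dat)
  summary(res)
  predict(res, transf = transf.ztor)  # back-transform the summary effect to r

  funnel(res)  # quick visual check relevant to publication bias analysis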

This course will be helpful for both introductory and advanced researchers in any field who are focusing on mean-based or correlation-based meta-analysis. Experimental meta-analysis, though related, will be addressed only indirectly. Consequently, this course is relevant to researchers from most fields (e.g., psychology, management, sociology, education, human development, social work, public health, communication). Learners will ideally be comfortable with introductory statistics, including regression, and be able to read and understand the method section of basic articles in their field (necessary for data entry). Proficiency in R is desirable but not necessary.

Upon completing this course, you will:

  • be able to produce a competitive tier-1 meta-analytic dataset, method section, and framework
  • tackle meta-analytic topics an order of magnitude larger than previously plausible, in a fraction of the time, including comprehensive meta-analytic correlation matrices
  • improve your management of a dispersed international meta-analytic research team
  • address journals' theory requirements through meta-regression, cross-lagged meta-analysis, and MASEM
  • understand the fundamentals of meta-analysis as well as the basic ideas behind more advanced techniques (e.g., one-stage MASEM)
Piers Steel, Ph.D.

Instructor: Piers Steel, Ph.D.

University of Calgary

Courses taught for CCRAM:

Meta-Analysis

Piers Steel received his Ph.D. in Industrial and Organizational Psychology from the University of Minnesota. He is Professor and the Brookfield Research Chair at the Haskayne School of Business at the University of Calgary. Piers' particular areas of research interest include culture, motivation, and decision-making; he also has expertise in systematic review and meta-analysis and is a member of the Society for Research Synthesis Methodology. He has published several methodology papers on how to improve meta-analysis and is a co-founder of the online meta-analytic platforms HubMeta and metaBUS. Piers' work has appeared in outlets such as the Journal of Personality and Social Psychology, Psychological Bulletin, Personality and Social Psychology Review, the Journal of Applied Psychology, Personnel Psychology, and the Academy of Management Review, among others. He is a fellow of the American Psychological Association, the Society for Industrial and Organizational Psychology, and the Association for Psychological Science. His meta-analytic work has been reported globally in thousands of news articles and produced one best-selling book.

Ogunfowora, B., Nguyen, V. Q., Steel, P., & Hwang, C. C. (2021). A meta-analytic investigation of the antecedents, theoretical correlates, and consequences of moral disengagement at work. Journal of Applied Psychology

Steel, P., Beugelsdijk, S., & Aguinis, H. (2021). The anatomy of an award-winning meta-analysis: Recommendations for authors, reviewers, and readers of meta-analytic reviews. Journal of International Business Studies, 52, 23-44

Steel, P., Schmidt, J., Bosco, F., & Uggerslev, K. (2019). The effects of personality on job satisfaction and life satisfaction: A meta-analytic investigation accounting for bandwidth-fidelity and commensurability. Human Relations, 72, 217–247

Doucouliagos, C., Stanley, T. & Steel, P. (2018). Does ICT generate economic growth? A meta-regression analysis. Journal of Economic Surveys, 32, 705-726

Zeng, R., Grogaard, B., & Steel, P. (2018). Complements or substitutes? A meta-analysis of the role of integration mechanisms in knowledge transfer in the MNE Network. Journal of World Business, 53, 415-432

Steel, P., Taras, V., Uggerslev, K., & Bosco, F. (2018). The happy culture: A meta-analytic review and empirical investigation of culture’s relationship with subjective wellbeing. Personality and Social Psychology Review, 22, 128-169

Lee, C., Bosco, F., Steel, P., & Uggerslev, K. (2017). A metaBUS enabled meta-analysis of career satisfaction. Career Development International, 22, 565-582.

Simmons, S., Caird, J. & Steel, P. (2017). A meta-analysis of in-vehicle and nomadic voice recognition system interaction and driving performance. Accident Analysis and Prevention, 106, 21-43

Bosco, F., Uggerslev, K., & Steel, P. (2017). metaBUS as a vehicle for facilitating meta-analysis. Human Resource Management Review, 27, 237-254

Paterson, T. A., Harms, P. D., Steel, P. & Credé, M. (2016). An assessment of the magnitude of effect sizes: Evidence from 30 years of meta-analysis in management. Journal of Leadership and Organizational Studies, 23, 66-81

Bosco, F., Steel, P., Oswald, F. L., Uggerslev, K., & Field, J. G. (2015). Cloud-based meta-analysis to bridge science and practice: Welcome to metaBUS. Personnel Assessment and Decisions, 1. Article 2.

Steel, P., Kammeyer-Mueller, J., & Paterson, T. (2015). Improving the meta-analytic assessment of effect size variance with an informed Bayesian prior. Journal of Management, 41, 718-743.

Caird, J. K., Johnston, K. A., Willness, C. R., Asbridge, M., & Steel, P. (2014). A meta-analysis of the effects of texting on driving. Accident Analysis & Prevention, 71, 311-318

Merkin, R., Taras, V., & Steel, P. (2014). State of the art themes in cross-cultural communication research: A meta-analytic review. International Journal of Intercultural Relations, 38, 1-23

Liu, X., Vredenburg, H. & Steel, P. (2014). A meta-analysis of factors leading to management control in international joint ventures. Journal of International Management, 20, 219-236

Taras, V., Steel, P., & Kirkman, B. (2012). Improving national cultural indices using a longitudinal meta-analysis of Hofstede's dimensions. Journal of World Business, 47, 329-334

Steel, P., & Taras, V. (2010). Culture as a consequence: A multilevel multivariate meta-analysis of the effects of individual and country characteristics on work-related cultural values. Journal of International Management, 16, 211-233

Kammeyer-Mueller, J., Steel, P., & Rubenstein, A. (2010). The other side of method bias: The perils of distinct source research designs. Multivariate Behavioral Research, 45, 294-321

Bowen, F., Rostami, M., & Steel, P. (2010). Timing is everything: A meta-analysis of the relationships between organizational performance and innovation. Journal of Business Research, 63, 1179–1185

Steel, P., & Kammeyer-Mueller, J. (2009). Using a meta-analytic perspective to enhance Job Component Validation. Personnel Psychology, 62, 533–552

Caird, J., Willness, C. R., Steel, P., & Scialfa, C. (2008). A meta-analysis of the effects of cell phones on driver performance. Accident Analysis & Prevention, 40, 1282-1293.

Steel, P., & Kammeyer-Mueller, J. (2008). Bayesian variance estimation for meta-analysis: Quantifying our uncertainty. Organizational Research Methods, 11, 54-78

Steel, P. (2007). The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. Psychological Bulletin, 133, 65-94.

Willness, C., Steel, P., & Lee, K. (2007). A meta-analysis of the antecedents and consequences of workplace sexual harassment. Personnel Psychology, 60, 127-162.

Steel, P. & Kammeyer-Mueller, J. (2002). Comparing meta-analytic moderator search techniques under realistic conditions. Journal of Applied Psychology, 87, 96-111.


Mediation, Moderation, and Conditional Process Analysis

Statistical mediation and moderation analyses are among the most widely used data analysis techniques in social science, health, and business research. Mediation analysis is used to test hypotheses about various intervening mechanisms by which causal effects operate. Moderation analysis is used to examine and explore questions about the contingencies or conditions of an effect, also called "interaction." Increasingly, moderation and mediation are being integrated analytically in the form of what has become known as "conditional process analysis," used when the goal is to understand the contingencies or conditions under which mechanisms operate. An understanding of the fundamentals of mediation and moderation analysis is in the job description of almost any empirical scholar. In this course, you will learn about the underlying principles and the practical applications of these methods using ordinary least squares (OLS) regression analysis and the PROCESS macro for SPSS, SAS, and R, invented by the course instructor and widely used in the behavioral sciences. This course is a companion to the instructor's book Introduction to Mediation, Moderation, and Conditional Process Analysis, published by The Guilford Press. A copy of the book is not required to benefit from the course, but it could help reinforce understanding.

In this course, you will learn about the underlying principles and the practical applications of mediation, moderation and conditional process analysis. It covers six broad topics:

  1. Direct, indirect, and total effects in a mediation model
  2. Estimation and inference in single mediator models using ordinary least squares regression
  3. Estimation and inference in mediation models with more than one mediator
  4. Moderation or “interaction” in ordinary least squares regression
  5. Testing, interpreting, probing, and visualizing interactions
  6. The integration of mediation and moderation: Conditional process analysis

Computer applications will focus on the use of ordinary least squares regression and the PROCESS macro for SPSS, SAS, and R, developed by the instructor, which makes the analyses described in this class much easier than they otherwise would be. This is a hands-on course, so maximum benefit results when learners can follow along using a laptop or desktop computer with a recent version of SPSS Statistics (version 23 or later), SAS (release 9.2 or later, with PROC IML installed), or R (version 3.6 or later; only the base module is needed, as no packages are used in this course). Learners can choose whichever statistical package they prefer. Stata users can benefit from the course content, but PROCESS, which makes these analyses much easier, is not available for Stata.
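To make the core idea concrete, here is a minimal base-R sketch (no packages, in keeping with the course requirements) of what PROCESS automates for a simple mediation model: estimating the indirect effect as the product of OLS coefficients and building a percentile bootstrap confidence interval. The data frame dat, with columns x, m, and y, is hypothetical.

  # Indirect effect a*b in a simple mediation model, with a bootstrap CI.
  set.seed(42)

  ab_hat <- function(d) {
    a <- coef(lm(m ~ x, data = d))["x"]      # effect of X on the mediator M
    b <- coef(lm(y ~ m + x, data = d))["m"]  # effect of M on Y, controlling for X
    unname(a * b)                            # indirect effect of X on Y through M
  }

  # Resample cases with replacement and re-estimate the indirect effect
  boot_ab <- replicate(5000, ab_hat(dat[sample(nrow(dat), replace = TRUE), ]))
  quantile(boot_ab, c(0.025, 0.975))         # 95% percentile bootstrap CI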

This course will be helpful for researchers in any field – including psychology, sociology, education, business, human development, social work, public health, communication, and others that rely on social science methodology – who want to learn how to apply the methods of moderation, mediation, and conditional process analysis using widely used software such as SPSS, SAS, and R.

Learners should be familiar with the practice of multiple regression analysis and elementary statistical inference. No knowledge of matrix algebra is required or assumed, nor is matrix algebra used in the delivery of course content. Learners should also have some experience with SPSS, SAS, or R, including opening data files and executing programs.

Upon completing this course, you will be able to:

  • statistically partition one variable’s effect on another into its primary pathways of influence, direct and indirect
  • understand modern approaches to inference about indirect effects in mediation models
  • test competing theories of mechanisms statistically through the comparison of indirect effects in models with multiple mediators
  • estimate and interpret mediation models with mediators operating in serial
  • estimate and interpret relative direct, indirect and total effects in a mediation model with a multi-categorical (more than 2 groups) independent variable
  • understand how to build flexibility into a regression model that allows a variable’s effect to be a function of another variable in a model
  • visualize and probe interactions in regression models (e.g. using the simple slopes/spotlight analysis and Johnson-Neyman/floodlight analysis approaches)
  • test, visualize, probe and interpret moderation in a model with a multi-categorical independent variable or moderator
  • integrate models involving moderation and mediation into a conditional process model
  • estimate the contingencies of mechanisms through the computation and inference about conditional indirect effects
  • determine whether a mechanism is dependent on a moderator variable
  • conduct a conditional process analysis with models with more than one mediator (serial and parallel)
  • understand the concept of differential dominance and appreciate its value in theory and research
  • conduct a conditional process analysis with a multi-categorical independent variable
  • apply the methods discussed in this course using the PROCESS procedure for SPSS, SAS and R
  • talk and write in an informed way about the mechanisms and contingencies of causal effects

In this course, we focus primarily on research designs that are experimental or cross-sectional in nature with continuous outcomes. We do not cover complex models involving dichotomous outcomes, latent variables, nested data (i.e., multilevel models) or the use of structural equation modeling.

Andrew F. Hayes, Ph.D.

Instructor: Andrew F. Hayes, Ph.D.

University of Calgary, CCRAM Academic Director

Courses taught for CCRAM:

Mediation, Moderation, and Conditional Process Analysis

Dr. Hayes received his Ph.D. in Social Psychology from Cornell University. Practicing primarily as a quantitative methodologist, he is currently a Distinguished Research Professor at the Haskayne School of Business at the University of Calgary, with an adjunct appointment in the Department of Psychology. He is the author of Introduction to Mediation, Moderation, and Conditional Process Analysis (2022) and Regression Analysis and Linear Models (2017), both published by The Guilford Press, and Statistical Methods for Communication Science (2005), published by Routledge. He also invented the PROCESS macro for SPSS, SAS, and R, widely used by researchers examining the mechanisms and contingencies of effects. He teaches courses on applied data analysis and conducts online and in-person workshops on statistical analysis for multidisciplinary audiences throughout the world, most frequently faculty and graduate students in business schools, but also researchers in education, psychology, social work, communication, public health, and government. His work has been cited well over 140,000 times according to Google Scholar, and he was designated a Highly Cited Researcher by Clarivate Analytics in 2019, 2020, and 2021.

Hayes, A. F. (2022). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (3rd edition). New York: The Guilford Press.

Igartua, J.-J., & Hayes, A. F. (2021). Mediation, moderation, and conditional process analysis: Concepts, computations, and some common confusions. Spanish Journal of Psychology, 24, e49.

Hayes, A. F., & Rockwood, N. J. (2020). Conditional process analysis: Concepts, computations, and advances in the modeling of the contingencies of mechanisms. American Behavioral Scientist, 64, 19-54.

Coutts, J. J., Hayes, A. F., & Jiang, T. (2019). Easy statistical mediation analysis with distinguishable dyadic data. Journal of Communication, 69, 612-649.

Hayes, A. F. (2018). Partial, conditional, and moderated moderated mediation: Quantification, inference, and interpretation. Communication Monographs, 85, 4-40.

Darlington, R. B., & Hayes, A. F. (2017). Regression analysis and linear models: Concepts, applications, and implementation. New York: The Guilford Press.

Hayes, A. F., & Rockwood, N. J. (2017). Regression-based statistical mediation and moderation analysis in clinical research: Observations, recommendations, and implementation. Behaviour Research and Therapy, 98, 39-57.

Hayes, A. F., Montoya, A. K., & Rockwood, N. J. (2017). The analysis of mechanisms and their contingencies: PROCESS versus structural equation modeling. Australasian Marketing Journal, 25, 76-81.

Hayes, A. F., & Montoya, A. K. (2017). A tutorial on testing, visualizing, and probing interaction involving a multicategorical variable in linear regression analysis. Communication Methods and Measures, 11, 1-30.

Montoya, A. K., & Hayes, A. F. (2017). Two condition within-participant statistical mediation analysis: A path-analytic framework. Psychological Methods, 22, 6-27.

Hayes, A. F. (2015). An index and test of linear moderated mediation. Multivariate Behavioral Research, 50, 1-22.

Hayes, A. F. (2014). Statistical mediation analysis with a multicategorical independent variable. British Journal of Mathematical and Statistical Psychology, 67, 451-470.

Hayes, A. F., & Scharkow, M. (2013). The relative trustworthiness of inferential tests of the indirect effect in statistical mediation analysis: Does method really matter? Psychological Science, 24, 1918-1927.

Hayes, A. F., & Preacher, K. J. (2010). Estimating and testing indirect effects in simple mediation models when the constituent paths are nonlinear. Multivariate Behavioral Research, 45, 627-660.

Hayes, A. F. (2009). Beyond Baron and Kenny: Statistical mediation analysis in the new millennium. Communication Monographs, 76, 408-420.

Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40, 879-891.

Want more information about the Canadian Centre for Research Analysis and Methods? Visit the CCRAM website.