Modeling Heterogeneity of Response Processes in Item Response Theory

Mirka Henninger (University of Zurich), Esther Ulitzsch (IPN Kiel), Thorsten Meiser (University of Mannheim)

This workshop provides an overview of basic and advanced models of Item Response Theory (IRT) for cognitive and personality assessment. The course starts with an introduction to IRT models for dichotomous and ordinal items (e.g., the Rasch model, the 2PL and 3PL models, the generalized partial credit model, and the graded response model), including parameter estimation and model testing. This initial introduction ensures that all participants have the required knowledge of the essential models, concepts, and techniques for the more advanced topics.
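
For orientation, parameter estimation and model comparison for these basic models can be carried out with standard R software. The following minimal sketch uses the mirt package and its built-in LSAT7 example data; this is just one possible choice, and the packages used in the workshop may differ.

    library(mirt)

    # Dichotomous example data shipped with mirt (five LSAT items)
    dat <- expand.table(LSAT7)

    # Unidimensional Rasch and 2PL models
    mod_rasch <- mirt(dat, model = 1, itemtype = "Rasch")
    mod_2pl   <- mirt(dat, model = 1, itemtype = "2PL")

    # Item parameters in the usual IRT parameterization
    coef(mod_2pl, IRTpars = TRUE, simplify = TRUE)

    # Model comparison via likelihood ratio test and information criteria
    anova(mod_rasch, mod_2pl)

Ordinal models such as the generalized partial credit model or the graded response model can be fitted analogously (e.g., itemtype = "gpcm" or itemtype = "graded").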

Building on the introduction to general IRT, various extensions of IRT models will be presented that allow researchers to analyze heterogeneity in latent response processes across individuals and/or items. The extensions include multidimensional and mixture IRT models for modeling individual response styles in personality assessment, incorporating information from process data, accommodating disengaged responding in cognitive measures, and analyzing the latent processes underlying fast or missing responses. Among the advanced topics, the workshop will also cover Bayesian estimation procedures and extensions of IRT modeling from the field of machine learning. Throughout the workshop, the models will be presented with their theoretical and statistical foundations and illustrated with real data. Model specification and estimation will be demonstrated with various R packages and practiced in supervised hands-on exercises.
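
As a small illustration of the response-style models covered in the workshop, the following base-R sketch recodes 5-point Likert responses into the binary pseudo-items of a simple IRTree model with midpoint, direction, and extremity nodes, in the spirit of Böckenholt and Meiser (2017). The function name and coding scheme are illustrative; the resulting pseudo-items would then be analyzed with a multidimensional IRT model in which each node type defines its own latent dimension.

    # Recode responses 1 (strongly disagree) to 5 (strongly agree) into
    # three binary pseudo-items (nodes) of an IRTree model
    irtree_recode <- function(x) {
      mrs <- ifelse(x == 3, 1, 0)                             # midpoint node
      dir <- ifelse(x == 3, NA, ifelse(x >= 4, 1, 0))         # direction node (agree vs. disagree)
      ers <- ifelse(x == 3, NA, ifelse(x %in% c(1, 5), 1, 0)) # extremity node
      cbind(mrs = mrs, dir = dir, ers = ers)
    }

    # Example: responses of six persons to one item
    irtree_recode(c(1, 2, 3, 4, 5, 3))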

Prerequisite knowledge and skills: Participants are expected to have basic experience with R and should bring their own notebooks for the practical exercises. The required R packages will be installed during the workshop.

Literature:

You are not expected to prepare any readings in advance; specific readings will be provided in class. Much of the workshop will be based on the following articles and books.

  • Böckenholt, U., & Meiser, T. (2017). Response style analysis with threshold and multi-process IRT models: A review and tutorial. British Journal of Mathematical and Statistical Psychology, 70, 159–181. https://doi.org/10.1111/bmsp.12086
  • Debelak, R., Strobl, C., & Zeigenfuse, M. D. (2022). An introduction to the Rasch model with examples in R. Boca Raton, FL: CRC Press.
  • De Boeck, P., & Wilson, M. (2004). Explanatory item response models. New York: Springer.
  • Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum.
  • Henninger, M., Debelak, R., & Strobl, C. (2022). A new stopping criterion for Rasch trees based on the Mantel-Haenszel effect size measure for differential item functioning. Educational and Psychological Measurement, 1–32. https://doi.org/10.1177/00131644221077135
  • Henninger, M., & Meiser, T. (2020). Different approaches to modeling response styles in divide-by-total item response theory models (Part I): A model integration. Psychological Methods, 25, 560–576. https://doi.org/10.1037/met0000249
  • Meiser, T., Plieninger, H., & Henninger, M. (2019). IRTree models with ordinal and multidimensional decision nodes for response styles and trait-based responses. British Journal of Mathematical and Statistical Psychology, 72, 501–516. https://doi.org/10.1111/bmsp.12158
  • Ulitzsch, E., Pohl, S., Khorramdel, L., Kroehne, U., & von Davier, M. (2022). A response-time-based latent response mixture model for identifying and modeling careless and insufficient effort responding in survey data. Psychometrika, 87, 593–619. https://doi.org/10.1007/s11336-021-09817-7
  • Ulitzsch, E., von Davier, M., & Pohl, S. (2020). A hierarchical latent response model for inferences about examinee engagement in terms of guessing and item-level nonresponse. British Journal of Mathematical and Statistical Psychology, 73, 83–112. https://doi.org/10.1111/bmsp.12188