IOMW 2020 Spotlight Talk: Integrating Natural Language Processing features within explanatory item response models to support score interpretation (Session 4/4) [Online]

Abstract

This paper describes the integration of Natural Language Processing (NLP) features within an explanatory item response modelling framework, in the context of a reading comprehension assessment item bank. Item properties derived through NLP algorithms were incorporated into a Rasch Latent Regression Linear Logistic Test Model with item error, extending the model described by Wilson and de Boeck (2004) with a random error term on the item side. Specifically, item difficulties were modelled as random variables predicted, with uncertainty, by NLP item property fixed effects (Janssen, Schepers and Peres, 2004), and person covariates were included to improve the accuracy with which the latent ability distributions were estimated. The focus of this study was the extent to which different kinds of NLP features explained variance in item difficulties. We also investigated how these results could be used to develop and validate proficiency level descriptors and item bank meta-data.
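As a rough illustration of the model structure described in the abstract, a minimal sketch of a latent regression LLTM with item error might be written as follows (the notation below is ours, assuming the standard formulation of these models, and is not taken verbatim from the paper):

\[ \operatorname{logit} \Pr(Y_{pi} = 1 \mid \theta_p, \beta_i) = \theta_p - \beta_i \]
\[ \theta_p \sim N(\mathbf{z}_p' \boldsymbol{\gamma},\ \sigma_\theta^2) \quad \text{(latent regression of ability on person covariates } \mathbf{z}_p\text{)} \]
\[ \beta_i = \mathbf{x}_i' \boldsymbol{\delta} + \varepsilon_i, \qquad \varepsilon_i \sim N(0,\ \sigma_\varepsilon^2) \quad \text{(item difficulty regressed on NLP item properties } \mathbf{x}_i\text{, with random item error)} \]

Under a formulation of this kind, the extent to which NLP features explain variance in item difficulties can be gauged, for example, by comparing the residual item variance \( \sigma_\varepsilon^2 \) with the variance of the item difficulties when no item predictors are included.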

Nathan Zoanetti is Research Director, Psychometrics and Methodology, at ACER. His work in educational measurement has spanned many settings, including school education, medical education, higher education, professional licensure examinations, and language testing. He has specialised knowledge in computer-based interactive assessment design and analysis, and a keen interest in leveraging emerging analysis techniques to enrich contemporary measurement practice. Nathan continues to serve on national and international assessment and measurement advisory groups and committees.

Xiaoliang Zhou is a Research Fellow in ACER’s Psychometrics and Methodology Program. He has expertise in signal detection approaches to hierarchical rater models, diagnostic cognitive modelling, CAT, and the application of statistical models to education data. He has four years of experience applying statistical models to K-12 education data and other data types. He has used education data to study standardization and clustering algorithms, applied hierarchical models to large-scale education data such as ELS:2002, and used visual techniques in other big data areas to explore STEM (science, technology, engineering, and math) students’ curriculum selection and performance. He is currently working on projects such as explanatory item response modelling and multivariate latent class signal detection theory rater models.

Ling Tan is a Senior Research Fellow in the psychometrics and methodology research division at ACER. His research interests include methodological issues in educational measurement, with a focus on Rasch measurement models, item response models, and Bayesian networks, and the application of these methods to cognitive assessments, multi-stage adaptive tests, and the assessment of non-cognitive skills such as student engagement.

Date: Tuesday, December 1, 2020, 2:00 pm
Building: Online session
Room: Zoom
Attachment: Presentation slides (PDF, 1007.02 KB)