IOMW 2020 Spotlight Talk: Integrating Natural Language Processing features within explanatory item response models to support score interpretation (Session 4/4) [Online]

Abstract
This paper describes the integration of Natural Language Processing (NLP) features within an explanatory item response modelling framework, in the context of a reading comprehension assessment item bank. Item properties derived through NLP algorithms were incorporated into a Rasch Latent Regression Linear Logistic Test Model with item error, extending the model described by Wilson and De Boeck (2004) with a random error term on the item side. Specifically, item difficulties were modelled as random variables that could be predicted, with uncertainty, by NLP item-property fixed effects (Janssen, Schepers, and Peres, 2004), and person covariates were included to improve the accuracy of the estimated latent ability distributions. The focus of this study was the extent to which different kinds of NLP features explained variance in item difficulties. We investigated how these results could be used to develop and validate proficiency level descriptors and item bank metadata.
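For orientation, the model family described above can be sketched as follows; the notation here is assumed for illustration and is not taken from the paper itself. It combines a linear logistic test model with item error on the item side and a latent regression on the person side:

\[
\operatorname{logit} P(Y_{pi}=1) = \theta_p - \beta_i,
\]
\[
\beta_i = \sum_{k} \gamma_k x_{ik} + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma_{\varepsilon}^2),
\]
\[
\theta_p = \sum_{j} \delta_j z_{pj} + \zeta_p, \qquad \zeta_p \sim N(0, \sigma_{\theta}^2),
\]

where \(x_{ik}\) are NLP-derived item properties with fixed effects \(\gamma_k\), \(\varepsilon_i\) is the random item error capturing difficulty variance not explained by the item properties, \(z_{pj}\) are person covariates with fixed effects \(\delta_j\), and \(\zeta_p\) is the residual person ability.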

Date: 
Tuesday, December 1, 2020 - 2:00pm
Building: 
Online session
Room: 
Zoom