Tuesday, February 7, 2023
2:00 - 4:00 PM (PST) at Berkeley Way West 1204 and via Zoom
Open to GSE faculty, students, and the community.
Request a Zoom link from firstname.lastname@example.org
In this study we explore the relationship between constructed-response (CR) and selected-response (SR) item types. We constructed substantively equivalent sets of SR and CR items, designed to assess multiple (high) levels of competency in argumentation using the Construct Modeling approach, building on a previously validated construct map for argumentation. We analyzed data from 303 middle school and high school students who were randomly assigned to the two assessment conditions (i.e., CR and SR).
Our findings indicate that the two assessment conditions generate two distinct, but correlated, psychometric dimensions. In particular, the item difficulty parameters from the SR items are highly correlated with those from the paired CR items, indicating that both sets are consistent with the original construct map for argumentation. However, the CR items were much harder for the students than the SR items (roughly the equivalent of a grade level), and the students perceived them as even more difficult still. We interpret this finding to mean that, in the CR condition, students are hampered by the requirement to write their responses in sentences that communicate their higher-level reasoning and capability; their facility with written expression thus becomes an obstacle only when constructed-response items are used to assess their knowledge. We use these results to review the uses of the two item formats and find value in both.
About the speakers:
Mark Wilson is a Distinguished Professor of Education at the University of California, Berkeley. He received his PhD from the University of Chicago in 1984. His interests focus on measurement and applied statistics, and he has published 167 refereed articles in those areas, along with 74 invited chapters in edited books and 15 books. He was elected President of the Psychometric Society and, more recently, President of the National Council on Measurement in Education (NCME); he is a Member of the US National Academy of Education, a Fellow of the American Educational Research Association and of the American Psychological Association, and a National Associate of the US National Research Council. He is Director of the Berkeley Evaluation and Assessment Research (BEAR) Center. His research focuses on the development and application of sound approaches for measurement in education and the social sciences, the development of statistical models suitable for measurement contexts, the creation of instruments to measure new constructs, and scholarship on the philosophy of measurement.
Other authors: Weeraphat Suksiri; Linda Morell; Jonathan Osborne; Sara Dozier