Student Session Bonanza

Tuesday, November 7, 2023

2:00 - 4:00 PM (PST) at Berkeley Way West 1204 and via Zoom

Open to GSE faculty, students, and the community.

Request a Zoom link from convenors@bear.berkeley.edu.

Abstracts:

AI-Assisted Feedback for Student Revisions of Short Responses by Aubrey Condor

In this session, Aubrey Condor explores how an automatic short answer grading (ASAG) model can be coupled with deep reinforcement learning (RL) to create a system of automatic formative feedback that encourages student engagement in a revision process for open-ended responses. From the presenter: I train an offline RL agent with proximal policy optimization to iteratively revise a student's response by either adding a predefined key phrase or deleting an existing portion of the response. The agent is optimized to alter students' original responses such that the revised responses achieve a high rating from the ASAG model using the fewest revisions. I examine the practical use of the revision agent by conducting a randomized controlled trial in which undergraduate students interact with an open-source online tutoring platform. Students are prompted to write a short response to an item on mathematical problem solving, and those allocated to the RL agent's intervention group receive immediate, personalized feedback from the agent. Students are shown the agent's revised version of their response and are asked to critique the agent's revision before revising their own response. I compare results of the RL intervention with a second intervention group that receives a personalized revision from ChatGPT, and with a control group in which students receive non-personalized, informative feedback about the item. I evaluate students' active engagement by analyzing their critiques of the feedback, and I evaluate the efficacy of the feedback for learning gain both by comparing the automatic grade of each student's original response with the automatic grade of their revised response and by computing a similarity metric between the student's revised response and the agent's revised response, to ensure that any apparent learning gain is not a result of copy-pasting.
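
For readers who want a concrete picture of the setup, the sketch below illustrates the kind of state, action, and reward structure such a revision agent could operate over. Everything in it (the key phrases, the stand-in ASAG scorer, the reward shaping) is an illustrative assumption, not the presenter's implementation, which trains the agent offline with proximal policy optimization.

    # Illustrative sketch only: a toy revision environment in the spirit of the
    # abstract above. Key phrases, the stand-in ASAG scorer, and the reward
    # shaping are assumptions for exposition.
    KEY_PHRASES = ["identify the unknown", "set up an equation", "check the answer"]  # hypothetical

    def asag_score(response: str) -> float:
        """Stand-in for a trained ASAG model; returns a rating in [0, 1]."""
        return sum(phrase in response for phrase in KEY_PHRASES) / len(KEY_PHRASES)

    class RevisionEnv:
        """State: the current response. Actions: add a key phrase or delete a sentence."""

        def __init__(self, original_response: str, step_penalty: float = 0.05):
            self.response = original_response
            self.step_penalty = step_penalty

        def available_actions(self):
            adds = [("add", p) for p in KEY_PHRASES if p not in self.response]
            deletes = [("delete", i) for i, s in enumerate(self.response.split(".")) if s.strip()]
            return adds + deletes

        def step(self, action):
            before = asag_score(self.response)
            kind, arg = action
            if kind == "add":
                self.response = self.response.rstrip() + " " + arg.capitalize() + "."
            else:
                sentences = self.response.split(".")
                del sentences[arg]
                self.response = ".".join(sentences)
            after = asag_score(self.response)
            # Reward the improvement in the ASAG rating and charge a small cost per
            # edit, so the agent favors the fewest revisions that raise the score.
            reward = (after - before) - self.step_penalty
            return self.response, reward, after >= 1.0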

Fine-tuning GPT Models for Auto-Scoring in Educational Measurement by Mingfeng Xue

GPT models have shown strong capabilities across a wide range of NLP tasks. In educational measurement, one of the most important and straightforward NLP tasks is scoring open-ended responses, which otherwise requires a large investment of time, human effort, and money. This presentation therefore shows how a fine-tuned GPT model (i.e., one whose parameters are partially updated to accommodate new data) can help score open-ended response items, and it presents results on scoring accuracy, the influence on latent trait estimates, and related outcomes. A discussion of future directions and a general routine for auto-scoring is also provided.
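
To make the idea of fine-tuning concrete, the sketch below shows one common recipe: put a classification head on a small GPT model, freeze most of its parameters, and train the remainder to predict rubric scores from responses. The model choice, the four-level rubric, the frozen layers, and the toy data are all assumptions for illustration and are not taken from the presentation.

    # Illustrative sketch only: fine-tuning a small GPT model to score open-ended
    # responses. Model, rubric, frozen layers, and example data are assumptions.
    import torch
    from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification

    NUM_SCORE_LEVELS = 4  # hypothetical 0-3 rubric

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=NUM_SCORE_LEVELS)
    model.config.pad_token_id = tokenizer.pad_token_id

    # "Updating parts of the parameters": freeze the transformer body and train
    # only the final block plus the newly added classification head.
    for param in model.transformer.parameters():
        param.requires_grad = False
    for param in model.transformer.h[-1].parameters():
        param.requires_grad = True

    # Toy batch of (response, human rubric score) pairs.
    responses = ["First find the slope, then substitute a point to get the intercept.",
                 "I am not sure how to start."]
    scores = torch.tensor([3, 0])
    batch = tokenizer(responses, padding=True, truncation=True, return_tensors="pt")

    optimizer = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=5e-5)
    outputs = model(**batch, labels=scores)  # cross-entropy loss over the score levels
    outputs.loss.backward()
    optimizer.step()

Freezing most of the network is only one way to update part of the parameters; full fine-tuning or adapter-style methods are common alternatives.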

Presenters:

Aubrey Condor is a PhD candidate at UC Berkeley in the Social Research Methods cluster in the School of Education. Her research interests include applications of machine learning to education and measurement.

Mingfeng Xue is a PhD student at the School of Education at UC Berkeley. His research focuses on applied psychometrics and multilevel modeling, especially applications of IRT. Recently, he has worked on the potential of GPT to facilitate educational measurement. His work has been published in journals including Behavior Research Methods.