

Poster
in
Workshop: Generative AI for Education (GAIED): Advances, Opportunities, and Challenges

Paper 10: Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning

Hunter McNichols · Wanyong Feng · Jaewook Lee · Alexander Scarlatos · Digory Smith · Simon Woodhead · Andrew Lan

Keywords: [ Natural Language Processing ] [ Large Language Models ] [ Personalized Education ] [ Artificial Intelligence ]


Abstract:

Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer and grade, and provide a reliable form of assessment. An important aspect of MCQs is the distractors, i.e., incorrect options that are designed to target specific misconceptions or insufficient knowledge among students. To date, the task of crafting high-quality distractors has largely remained a labor-intensive process for teachers and learning content designers, which limits scalability. In this work, we explore the task of automated distractor and corresponding feedback message generation in math MCQs using large language models. We establish a formulation of these two tasks and propose a simple, in-context learning-based solution. Moreover, we propose generative AI-based metrics for evaluating the quality of the feedback messages. We conduct extensive experiments on these tasks using a real-world MCQ dataset. Our findings suggest that there is substantial room for improvement in automated distractor and feedback generation; based on these findings, we outline several directions for future work.
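The abstract describes an in-context learning (few-shot prompting) approach to distractor and feedback generation. The sketch below illustrates the general prompting pattern only; the example questions, distractors, feedback messages, and the `build_prompt` helper are illustrative assumptions, not the paper's actual prompt format or dataset.

```python
# Minimal sketch of few-shot (in-context learning) prompt assembly for
# math MCQ distractor + feedback generation. All examples are
# hypothetical placeholders, not drawn from the paper's dataset.

# Worked examples shown to the model in-context (hypothetical).
EXAMPLES = [
    {
        "question": "What is 1/2 + 1/3?",
        "answer": "5/6",
        "distractor": "2/5",
        "feedback": "You added the numerators and denominators separately.",
    },
    {
        "question": "Solve for x: 2x + 4 = 10.",
        "answer": "3",
        "distractor": "7",
        "feedback": "You subtracted 4 from only one side before dividing.",
    },
]


def build_prompt(examples, target_question, target_answer):
    """Concatenate worked examples, then pose the target question so the
    model completes the missing distractor and feedback."""
    parts = []
    for ex in examples:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Correct answer: {ex['answer']}\n"
            f"Distractor: {ex['distractor']}\n"
            f"Feedback: {ex['feedback']}\n"
        )
    # Leave the final distractor blank for the model to generate.
    parts.append(
        f"Question: {target_question}\n"
        f"Correct answer: {target_answer}\n"
        f"Distractor:"
    )
    return "\n".join(parts)


prompt = build_prompt(EXAMPLES, "What is 3/4 - 1/4?", "1/2")
```

The resulting string would then be sent to a large language model; the generated completion supplies a candidate distractor and its feedback message.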
