

Poster in Workshop: Generative AI for Education (GAIED): Advances, Opportunities, and Challenges

Paper 17: Towards AI-Assisted Multiple Choice Question Generation and Quality Evaluation at Scale: Aligning with Bloom’s Taxonomy

Kevin Hwang · Sai Challagundla · Maryam Alomair · Karen Chen · Fow-Sen Choa

Keywords: [ Artificial Intelligence ] [ Large Language Models ] [ Automated Question Generation ] [ Bloom’s Taxonomy ] [ GPT-3.5 ]


Abstract:

In educational assessment, Multiple Choice Questions (MCQs) are widely used because they are efficient to grade and to provide feedback on. Manual MCQ generation, however, faces challenges: relying on a limited pool of questions can lead to item repetition, which compromises both the reliability of assessments and the security of the evaluation procedure, especially in high-stakes settings. This study explores an AI-driven approach to creating and evaluating MCQs in introductory chemistry and biology. The methodology has four steps: (1) generating Bloom's Taxonomy-aligned questions through zero-shot prompting with GPT-3.5; (2) validating each question's alignment with Bloom's Taxonomy using RoBERTa, a transformer-based language model whose self-attention mechanism produces context-aware representations of the words in a sentence; (3) evaluating question quality against Item Writing Flaws (IWFs), common defects that arise when writing test items; and (4) validating the questions with subject matter experts. Our research demonstrates GPT-3.5's capacity to produce higher-order thinking questions, particularly at the "evaluation" level. We observe alignment between GPT-generated questions and human-assessed complexity, albeit with occasional disparities. Question quality assessment reveals differences between human and machine evaluations that correlate inversely with Bloom's Taxonomy levels. These findings shed light on automated question generation and assessment, presenting the potential for advancements in AI-driven educational evaluation methods.
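To make the generate-then-verify pipeline concrete, the following is a minimal Python sketch under stated assumptions; it is an illustration, not the authors' implementation. The prompt wording, the `gpt-3.5-turbo` model name, and the `path/to/roberta-bloom-classifier` checkpoint are placeholders (the paper does not publish these details), and a real RoBERTa classifier would need to be fine-tuned on questions labeled with Bloom's Taxonomy levels.

```python
# Sketch: zero-shot MCQ generation with GPT-3.5, followed by RoBERTa-based
# verification of the intended Bloom's Taxonomy level.
# Assumptions (not from the paper): prompt wording, model name, and the
# fine-tuned RoBERTa checkpoint path are illustrative placeholders.

from openai import OpenAI
from transformers import pipeline

BLOOM_LEVELS = ["knowledge", "comprehension", "application",
                "analysis", "synthesis", "evaluation"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical fine-tuned RoBERTa classifier mapping a question to one
# of the six Bloom's Taxonomy levels.
bloom_classifier = pipeline(
    "text-classification",
    model="path/to/roberta-bloom-classifier",  # placeholder checkpoint
)

def generate_mcq(topic: str, level: str) -> str:
    """Zero-shot prompt GPT-3.5 for one MCQ at a target Bloom's level."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Write one multiple choice question on {topic} that targets "
                f"the '{level}' level of Bloom's Taxonomy. Provide four "
                "answer options (A-D) and mark the correct answer."
            ),
        }],
    )
    return response.choices[0].message.content

question = generate_mcq("introductory chemistry", "evaluation")
prediction = bloom_classifier(question)[0]  # {'label': ..., 'score': ...}
print(question)
print("Intended: evaluation | RoBERTa predicted:", prediction["label"])
```

In a full pipeline, questions whose predicted level disagrees with the intended level would be regenerated or flagged, and the surviving questions would then be screened for Item Writing Flaws and reviewed by subject matter experts, as the abstract describes.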
