RPGBENCH: Evaluating Large Language Models as Role-Playing Game Engines
Abstract
We present \textsc{RPGBench}, the first benchmark designed to evaluate large language models (LLMs) as text-based role-playing game (RPG) engines. RPGBench comprises two core tasks: Game Creation (GC) and Game Simulation (GS). In GC, an LLM must craft a valid and playable RPG world using a structured event-state representation, ensuring logical coherence and proper termination conditions. In GS, the LLM simulates interactive gameplay across multiple rounds while consistently updating states and enforcing game rules. To comprehensively assess performance, RPGBench integrates objective and subjective evaluation methodologies. Objective measures verify adherence to event mechanics and check variable updates without requiring human intervention. Subjective measures, such as content interestingness, action quality, and role-playing capability, are evaluated via an LLM-as-a-judge framework, where a strong LLM grades each candidate's outputs. GC evaluation is fully objective and is also used to filter game prompts for the GS task. This design enables a scalable pipeline for creating GS environments. Empirical results demonstrate that state-of-the-art LLMs can produce engaging stories but often struggle to implement consistent, verifiable game mechanics, particularly in long, complex scenarios. By combining structured, rule-based assessments with LLM-based judgments, RPGBench provides a new standard for evaluating how well LLMs can balance creativity, coherence, and complexity in text RPGs, opening avenues for immersive and controllable interactive storytelling.
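To make the event-state idea concrete, the sketch below shows one hypothetical way such a representation and an automatic playability check could look. The schema, the field names (preconditions, effects, termination), and the greedy check are illustrative assumptions for exposition only, not RPGBench's actual format or verification procedure.

\begin{verbatim}
# Illustrative sketch only: a hypothetical dict-based event-state
# game spec and a toy objective "playability" check. All field
# names and the check itself are assumptions, not RPGBench's schema.
from dataclasses import dataclass


@dataclass
class Event:
    name: str
    preconditions: dict[str, int]  # required minimum values, e.g. {"keys": 1}
    effects: dict[str, int]        # state deltas applied when the event fires


@dataclass
class GameSpec:
    initial_state: dict[str, int]
    events: list[Event]
    termination: dict[str, int]    # state thresholds that end the game


def is_playable(game: GameSpec) -> bool:
    """Greedy one-pass check: apply events in order whenever their
    preconditions hold, then test whether the termination thresholds
    are reachable from the initial state."""
    state = dict(game.initial_state)
    for event in game.events:
        if all(state.get(k, 0) >= v for k, v in event.preconditions.items()):
            for k, delta in event.effects.items():
                state[k] = state.get(k, 0) + delta
    return all(state.get(k, 0) >= v for k, v in game.termination.items())


game = GameSpec(
    initial_state={"health": 3, "keys": 0},
    events=[
        Event("find_key", preconditions={"health": 1}, effects={"keys": 1}),
        Event("open_gate", preconditions={"keys": 1}, effects={"progress": 1}),
    ],
    termination={"progress": 1},
)
print(is_playable(game))  # True for this toy spec
\end{verbatim}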