AgentChangeBench: A Multi-Dimensional Evaluation Framework for Goal-Shift Robustness in Conversational AI
Manik Rana · Calissa Man · Jeffrey Paine · Anotida Expected Msiiwa · Ahan M R · Kevin Zhu · Vasu Sharma · Sunishchal Dev
Abstract
Goal changes are a defining feature of real-world multi-turn interactions, yet current agent benchmarks primarily evaluate static objectives or one-shot tool use. We introduce $\textbf{AgentChangeBench}$, a benchmark explicitly designed to measure how tool-augmented language model agents adapt to mid-dialogue goal shifts across three enterprise domains. Our framework formalizes evaluation through four complementary metrics: Task Success Rate (TSR) for effectiveness, Tool Use Efficiency (TUE) for reliability, Tool Call Redundancy Rate (TCRR) for wasted effort, and Goal-Shift Recovery Time (GSRT) for adaptation latency. AgentChangeBench comprises $\textbf{2,835}$ task sequences and five user personas, each designed to trigger realistic shift points in ongoing workflows. Using this setup, we evaluate several frontier models and uncover sharp contrasts obscured by traditional pass@k scores: for example, GPT-4o reaches $92.2\%$ recovery on airline booking shifts while Gemini collapses to $48.6\%$, and retail tasks show near-perfect parameter validity yet redundancy rates above $80\%$, revealing major inefficiencies. These findings demonstrate that high raw accuracy does not imply robustness under dynamic goals, and that explicit measurement of recovery time and redundancy is essential. AgentChangeBench establishes a reproducible testbed for diagnosing and improving agent resilience in realistic enterprise settings.
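The abstract names but does not spell out how TCRR and GSRT are computed. The sketch below shows one plausible reading on a toy tool-call log; the transcript representation, field names, and the exact notions of "redundant call" and "aligned call" are illustrative assumptions, not the paper's definitions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolCall:
    turn: int          # dialogue turn at which the call was issued
    name: str          # tool name
    args: frozenset    # hashable argument set, e.g. frozenset({("dest", "SFO")})

def tool_call_redundancy_rate(calls: list[ToolCall]) -> float:
    """Fraction of tool calls that exactly repeat an earlier (name, args) pair."""
    seen, redundant = set(), 0
    for c in calls:
        key = (c.name, c.args)
        if key in seen:
            redundant += 1
        seen.add(key)
    return redundant / len(calls) if calls else 0.0

def goal_shift_recovery_time(shift_turn: int,
                             calls: list[ToolCall],
                             aligned_names: set[str]) -> Optional[int]:
    """Turns between the goal shift and the first tool call aligned with the new goal.

    Returns None if the agent never issues an aligned call (no recovery).
    """
    for c in sorted(calls, key=lambda c: c.turn):
        if c.turn >= shift_turn and c.name in aligned_names:
            return c.turn - shift_turn
    return None

# Toy example: the user switches from booking to cancelling at turn 3.
calls = [
    ToolCall(1, "search_flights", frozenset({("dest", "SFO")})),
    ToolCall(2, "search_flights", frozenset({("dest", "SFO")})),  # redundant repeat
    ToolCall(4, "cancel_booking", frozenset({("id", "B42")})),    # aligned with new goal
]
print(tool_call_redundancy_rate(calls))                        # ~0.33
print(goal_shift_recovery_time(3, calls, {"cancel_booking"}))  # 1 turn
```

Under these assumptions a perfectly efficient agent would score a TCRR of 0 and a GSRT of 0 turns; the paper's reported gaps (e.g. retail redundancy above 80%) correspond to large deviations from that baseline.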