Establishing Best Practices in Building Rigorous Agentic Benchmarks
Yuxuan Zhu · Tengjun Jin · Yada Pruksachatkun · Andy Zhang · Shu Liu · Sasha Cui · Sayash Kapoor · Shayne Longpre · Kevin Meng · Rebecca Weiss · Fazl Barez · Rahul Gupta · Jwala Dhamala · Jacob Merizian · Mario Giulianelli · Harry Coppock · Cozmin Ududec · Antony Kellermann · Jasjeet Sekhon · Jacob Steinhardt · Sarah Schwettmann · Arvind Narayanan · Matei A Zaharia · Ion Stoica · Percy Liang · Daniel Kang
Abstract
Benchmarks are essential for quantitatively tracking progress in AI. As AI agents become increasingly capable, researchers and practitioners have introduced agentic benchmarks to evaluate agents on complex, real-world tasks. These benchmarks typically measure agent capabilities by evaluating task outcomes via specific reward designs. However, we show that many agentic benchmarks have issues in task setup or reward design. For example, SWE-bench Verified uses insufficient test cases, while $\tau$-bench counts empty responses as successes. Such issues can lead to under- or overestimation of agents’ performance by up to 100% in relative terms. To make agentic evaluation rigorous, we introduce the Agentic Benchmark Checklist (ABC), a set of guidelines that we synthesized from our benchmark-building experience, a survey of best practices, and previously reported issues. When applied to CVE-Bench, a benchmark with a particularly complex evaluation design, ABC reduces performance overestimation by 33%.
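To illustrate the kind of reward-design flaw the abstract alludes to, the sketch below shows how an outcome-only check can credit an agent that does nothing. This is a minimal, hypothetical example, not $\tau$-bench's actual evaluation code; the function and variable names are illustrative assumptions.

```python
# Hypothetical sketch of a reward design that grades only the final environment state.
# Not tau-bench's real code; names and data are illustrative.

def evaluate_episode(final_state: dict, expected_state: dict) -> float:
    """Return 1.0 if the environment ends in the expected state, else 0.0."""
    return 1.0 if final_state == expected_state else 0.0

# Suppose the correct behavior for a task is to refuse an invalid request,
# so the expected final state equals the initial state (no change).
initial = {"orders": {"A1": "pending"}}
expected = initial

# An agent that returns an empty response and takes no actions leaves the
# state untouched, so a state-comparison reward alone cannot distinguish a
# correct refusal from doing nothing at all.
final_state_of_empty_response_agent = dict(initial)
print(evaluate_episode(final_state_of_empty_response_agent, expected))  # 1.0
```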