Benchmarking Overton Pluralism in LLMs
Elinor Poole-Dayan · Jiayi Wu · Jiaxin Pei · Michiel Bakker
Abstract
We introduce the first framework for measuring Overton pluralism in large language models, defined as the extent to which diverse viewpoints are represented in model outputs. We (i) formalize Overton pluralism as a set-coverage metric (OvertonScore), (ii) conduct a large-scale U.S.-representative human study (N=100; 30 questions; 8 LLMs), and (iii) develop an automated benchmark that reproduces human judgments with high fidelity. Our findings show that while most models achieve comparable pluralism, Gemma 3-27B underperforms and GPT o4-mini achieves the highest OvertonScore. The automated benchmark replicates these human results and generalizes to unseen models, enabling scalable evaluation.
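As a minimal illustration of a set-coverage formulation (an assumption for exposition; the paper's exact definition of OvertonScore may differ), the metric can be sketched as the fraction of reference viewpoints for each question that a model's response covers, averaged over questions. The `covers` judgment and the toy data below are hypothetical placeholders, e.g. a human rater or LLM judge in practice.

```python
from typing import Callable, Dict, List


def overton_score(
    responses: Dict[str, str],
    reference_viewpoints: Dict[str, List[str]],
    covers: Callable[[str, str], bool],
) -> float:
    """Hypothetical set-coverage sketch of an OvertonScore-style metric.

    For each question, compute the fraction of reference viewpoints that the
    model's response covers (per the supplied `covers` judgment), then
    average across questions.
    """
    per_question = []
    for qid, viewpoints in reference_viewpoints.items():
        if not viewpoints:
            continue
        response = responses.get(qid, "")
        covered = sum(covers(response, v) for v in viewpoints)
        per_question.append(covered / len(viewpoints))
    return sum(per_question) / len(per_question) if per_question else 0.0


# Toy usage with a trivial substring-based coverage check (illustrative only).
if __name__ == "__main__":
    refs = {"q1": ["ban it", "regulate it", "leave it to states"]}
    resp = {"q1": "Some argue we should regulate it; others say leave it to states."}
    print(overton_score(resp, refs, lambda r, v: v in r))
```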