There is a tendency across different subfields in AI to see value in a small collection of influential benchmarks, which we term 'general' benchmarks. These benchmarks operate as stand-ins or abstractions for a range of anointed common problems that are frequently framed as foundational milestones on the path towards flexible and generalizable AI systems. State-of-the-art performance on these benchmarks is widely understood as indicative of progress towards these long-term goals. In this position paper, we explore how such benchmarks are designed, constructed and used in order to reveal key limitations of their framing as the functionally 'general' broad measures of progress they are set up to be.
Author Information
Deborah Raji (UC Berkeley)
Remi Denton (Google)

Remi Denton (they/them) is a Staff Research Scientist at Google, within the Technology, AI, Society, and Culture team, where they study the sociocultural impacts of AI technologies and the conditions of AI development. Prior to joining Google, Remi received their PhD in Computer Science from the Courant Institute of Mathematical Sciences at New York University, where they focused on unsupervised learning and generative modeling of images and video. Before that, they received their BSc in Computer Science and Cognitive Science at the University of Toronto. Though trained formally as a computer scientist, Remi draws ideas and methods from multiple disciplines and gravitates towards highly interdisciplinary collaborations in order to examine AI systems from a sociotechnical perspective. Remi's recent research centers on emerging text- and image-based generative AI, with a focus on data considerations and representational harms. Remi previously published under the name "Emily Denton".
Emily M. Bender (University of Washington)
Alex Hanna (Google)
Amandalynne Paullada (University of Washington)
More from the Same Authors
- 2021: Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research
  Bernard Koch · Remi Denton · Alex Hanna · Jacob G Foster
- 2021: Artsheets for Art Datasets
  Ramya Srinivasan · Remi Denton · Jordan Famularo · Negar Rostamzadeh · Fernando Diaz · Beth Coleman
- 2021: Are We Learning Yet? A Meta Review of Evaluation Failures Across Machine Learning
  Thomas Liao · Rohan Taori · Deborah Raji · Ludwig Schmidt
- 2023: Developing A Conceptual Framework for Analyzing People in Unstructured Data
  Mark Díaz · Sunipa Dev · Emily Reif · Remi Denton · Vinodkumar Prabhakaran
- 2023: Grounded Evaluations for Assessing Real-World Harms
  Deborah Raji
- 2022 Poster: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
  Chitwan Saharia · William Chan · Saurabh Saxena · Lala Li · Jay Whang · Remi Denton · Kamyar Ghasemipour · Raphael Gontijo Lopes · Burcu Karagol Ayan · Tim Salimans · Jonathan Ho · David Fleet · Mohammad Norouzi
- 2022 Social: Ethics Review - Open Discussion
  Deborah Raji · William Isaac · Cherie Poland · Alexandra Luccioni
- 2021: Evaluation as a Process for Engineering Responsibility in AI
  Deborah Raji
- 2021: Live panel: ImageNets of "x": ImageNet's Infrastructural Impact
  Remi Denton · Alex Hanna
- 2021: ImageNets of "x": ImageNet's Infrastructural Impact
  Remi Denton · Alex Hanna
- 2021: Career and Life: Panel Discussion - Bo Li, Adriana Romero-Soriano, Devi Parikh, and Emily Denton
  Remi Denton · Devi Parikh · Bo Li · Adriana Romero
- 2020: How should researchers engage with controversial applications of AI?
  Logan Koepke · Catherine O'Neil · Tawana Petty · Cynthia Rudin · Deborah Raji · Shawn Bushway
- 2020: Harms from AI research
  Anna Lauren Hoffmann · Nyalleng Moorosi · Vinay Prabhu · Deborah Raji · Jacob Metcalf · Sherry Stanley
- 2020 Workshop: Navigating the Broader Impacts of AI Research
  Carolyn Ashurst · Rosie Campbell · Deborah Raji · Solon Barocas · Stuart Russell
- 2020: Data and its (dis)contents: A survey of dataset development and use in machine learning research
  Amandalynne Paullada
- 2020: AI and the Everything in the Whole Wide World Benchmark
  Deborah Raji
- 2020: Invited Talk 3: Inioluwa Deborah Raji
  Deborah Raji
- 2020: Panel
  Kilian Weinberger · Maria De-Arteaga · Shibani Santurkar · Jonathan Frankle · Deborah Raji
- 2019: Emily Bender (University of Washington) "Making Stakeholder Impacts Visible in the Evaluation Cycle: Towards Fairness-Integrated Shared Tasks and Evaluation Metrics"
  Emily M. Bender
- 2019: AI's Blindspots and Where to Find Them
  Deborah Raji