

Poster

Benchmark Repositories for Better Benchmarking

Rachel Longjohn · Markelle Kelly · Sameer Singh · Padhraic Smyth

West Ballroom A-D #5205
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

In machine learning research, it is common to evaluate algorithms via their performance on standard benchmark datasets. While a growing body of work establishes guidelines for, and levies criticisms at, data and benchmarking practices in machine learning, comparatively less attention has been paid to the repositories where these datasets are stored, documented, and shared. In this paper, we analyze the landscape of these benchmark repositories and the role they can play in improving benchmarking. This role includes addressing issues with both the datasets themselves (e.g., representational harms, construct validity) and the manner in which evaluation is carried out using such datasets (e.g., overemphasis on a few datasets and metrics, lack of reproducibility). To this end, we identify and discuss a set of considerations surrounding the design and use of benchmark repositories, with a focus on improving benchmarking practices in machine learning.
