A Deep Dive into Dataset Imbalance and Bias in Face Identification
Valeriia Cherepanova · Steven Reich · Samuel Dooley · Hossein Souri · John Dickerson · Micah Goldblum · Tom Goldstein

As the deployment of automated face recognition (FR) systems proliferates, bias in these systems is not just an academic question, but a matter of public concern. Media portrayals often center imbalance as the main source of bias, i.e., that FR models perform worse on images of non-white people or women because these demographic groups are underrepresented in training data. Recent academic research paints a more nuanced picture of this relationship. However, previous studies of data imbalance in FR have focused exclusively on the face verification setting, while the face identification setting has been largely ignored, despite being deployed in sensitive applications such as law enforcement. This is an unfortunate omission, as "imbalance" is a more complex matter in identification: imbalance may arise not only in the training data but also in the testing data, and it may affect either the proportion of identities belonging to each demographic group or the number of images belonging to each identity. In this work, we address this gap in the research by thoroughly exploring the effects of each kind of imbalance possible in face identification, and we discuss other factors that may impact bias in this setting.

Author Information

Valeriia Cherepanova (University of Maryland)
Steven Reich (University of Maryland)
Samuel Dooley (Department of Computer Science, University of Maryland, College Park)
Hossein Souri (Johns Hopkins University)
John Dickerson (Arthur AI & University of Maryland)
Micah Goldblum (University of Maryland)
Tom Goldstein (University of Maryland)
