Workshop: Algorithmic Fairness through the Lens of Time

Improving Fairness in Facial Recognition Models with Distribution Shifts

Gianluca Barone · Aashrit Cunchala · Rudy Nunez · Nicole Yang


In this paper, we aim to improve the robustness of machine learning algorithms for facial recognition when the underlying datasets undergo distribution shifts. This is particularly important when designing fair algorithms for different demographics and changing environments. A classification algorithm trained on one face dataset can be sensitive when applied to a different face dataset. Even with access to enough data, the distribution of the target data may shift over time due to the aging of the population, the environment, the proportions of demographics, and so on. We first address this issue by providing empirical studies of the out-of-distribution effect on several popular face datasets. By exposing the model to auxiliary datasets and outliers during training, we provide ways to improve performance when training and testing data come from different distributions. Furthermore, class imbalance and distribution shift can occur simultaneously; we emphasize the need to consider both and showcase the model's performance on different face dataset combinations.
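The abstract mentions exposing the model to auxiliary datasets and outliers during training. One common way to realize this is an outlier-exposure-style objective: standard cross-entropy on in-distribution faces plus a term that pushes predictions on auxiliary outlier images toward the uniform distribution. The sketch below illustrates that general idea; the function names and the weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with a max-shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def outlier_exposure_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Cross-entropy on in-distribution samples plus a penalty (weight `lam`,
    an assumed hyperparameter) that is minimized when predictions on the
    auxiliary outlier batch are uniform over the classes."""
    p_in = softmax(logits_in)
    ce = -np.mean(np.log(p_in[np.arange(len(labels_in)), labels_in] + 1e-12))
    p_out = softmax(logits_out)
    # Cross-entropy against the uniform target: smallest when p_out is uniform.
    oe = -np.mean(np.log(p_out + 1e-12))
    return ce + lam * oe

# Confident predictions on outliers are penalized more than uniform ones.
logits_in = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
labels_in = np.array([0, 1])
loss_uniform = outlier_exposure_loss(logits_in, labels_in, np.zeros((2, 3)))
loss_confident = outlier_exposure_loss(
    logits_in, labels_in, np.array([[10.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
)
```

Under this objective, the classifier remains accurate on the in-distribution training faces while learning not to make confident predictions on data unlike them, which is one route to the robustness under dataset shift that the abstract describes.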
