Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference
Disi Ji · Padhraic Smyth · Mark Steyvers

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1545

Group fairness is measured via parity of quantitative metrics across different protected demographic groups. In this paper, we investigate the problem of reliably assessing group fairness metrics when labeled examples are few but unlabeled examples are plentiful. We propose a general Bayesian framework that can augment labeled data with unlabeled data to produce more accurate and lower-variance estimates compared to methods based on labeled data alone. Our approach estimates calibrated scores (for unlabeled examples) of each group using a hierarchical latent variable model conditioned on labeled examples. This in turn allows for inference of posterior distributions for an array of group fairness metrics with a notion of uncertainty. We demonstrate that our approach leads to significant and consistent reductions in estimation error across multiple well-known fairness datasets, sensitive attributes, and predictive models. The results clearly show the benefits of using both unlabeled data and Bayesian inference in assessing whether a prediction model is fair or not.
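To make the idea concrete, here is a minimal, simplified sketch of Bayesian fairness assessment on labeled data alone. It is not the paper's hierarchical latent variable model (which additionally exploits unlabeled examples via calibrated scores): it only illustrates how a posterior distribution, rather than a point estimate, can quantify uncertainty in a group fairness metric. The groups, outcomes, and Beta-Bernoulli model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary model predictions (1 = positive) for two protected groups.
group_a = np.array([1, 0, 1, 1, 0, 1, 0, 1])
group_b = np.array([0, 0, 1, 0, 1, 0, 0, 0])

def posterior_rate_samples(outcomes, n_samples=10_000, alpha=1.0, beta=1.0):
    """Draw samples from the Beta-Bernoulli posterior over a group's
    positive-prediction rate, with a Beta(alpha, beta) prior."""
    k, n = int(outcomes.sum()), len(outcomes)
    return rng.beta(alpha + k, beta + n - k, size=n_samples)

# Monte Carlo posterior over the demographic-parity gap |p_a - p_b|.
gap = np.abs(posterior_rate_samples(group_a) - posterior_rate_samples(group_b))
lo, hi = np.quantile(gap, [0.025, 0.975])
print(f"posterior mean gap: {gap.mean():.3f}, "
      f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```

With few labeled examples the credible interval is wide, which is exactly the uncertainty the paper's framework aims to reduce by incorporating plentiful unlabeled data.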

Author Information

Disi Ji (University of California, Irvine)
Padhraic Smyth (University of California, Irvine)
Mark Steyvers (University of California, Irvine)