

Oral in Workshop: Algorithmic Fairness through the Lens of Causality and Privacy

Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes

Tennison Liu · Alex Chan · Boris van Breugel · Mihaela van der Schaar


Abstract:

It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences. Fair ML has largely focused on the protection of a single attribute in the simpler setting where both attributes and target outcomes are binary. However, many practical real-world problems entail the simultaneous protection of multiple sensitive attributes, which are often not simply binary but continuous or categorical. To address this more challenging task, we introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces. This leads to two practical tools: first, the FairCOCCO Score, a normalised metric that can quantify fairness in settings with single or multiple sensitive attributes of arbitrary type; and second, a regularisation term that can be incorporated into arbitrary learning objectives to obtain fair predictors. These contributions address crucial gaps in the algorithmic fairness literature, and we empirically demonstrate consistent improvements over state-of-the-art techniques in balancing predictive power and fairness on both synthetic and real-world datasets.
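As an illustration of the kind of measure the abstract describes, below is a minimal NumPy sketch of a normalised kernel dependence score between predictions and (possibly multivariate) sensitive attributes, in the spirit of the FairCOCCO Score. The concrete choices are assumptions rather than the paper's specification: the RBF kernels with a shared bandwidth sigma, the Tikhonov constant eps, the Cauchy-Schwarz normalisation, and the function names are all illustrative, not the authors' reference implementation.

    import numpy as np

    def rbf_gram(x, sigma=1.0):
        # Gram matrix of a Gaussian (RBF) kernel; x has shape (n, d).
        sq = np.sum(x ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (x @ x.T)
        return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

    def kernel_dependence_score(y_hat, s, eps=1e-3, sigma=1.0):
        # Normalised kernel cross-covariance dependence between predictions
        # y_hat (n, d_y) and sensitive attributes s (n, d_s). Values lie in
        # [0, 1]; values near 0 indicate approximate independence.
        n = y_hat.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n                # centering matrix
        Ky = H @ rbf_gram(y_hat, sigma) @ H                # centred Gram matrices
        Ks = H @ rbf_gram(s, sigma) @ H
        Ry = Ky @ np.linalg.inv(Ky + n * eps * np.eye(n))  # Tikhonov-normalised
        Rs = Ks @ np.linalg.inv(Ks + n * eps * np.eye(n))
        num = np.trace(Ry @ Rs)
        den = np.sqrt(np.trace(Ry @ Ry) * np.trace(Rs @ Rs))  # Cauchy-Schwarz bound
        return float(num / den)

    # Toy check: predictions correlated with a sensitive attribute should
    # score higher than independent predictions.
    rng = np.random.default_rng(0)
    s = rng.normal(size=(200, 2))                  # two continuous attributes
    y_fair = rng.normal(size=(200, 1))             # independent of s
    y_unfair = s[:, :1] + 0.1 * rng.normal(size=(200, 1))
    print(kernel_dependence_score(y_fair, s))      # near 0
    print(kernel_dependence_score(y_unfair, s))    # substantially larger

To use such a quantity as the regulariser the abstract mentions, one would recompute the same score in a differentiable framework and minimise the task loss plus lambda times the score; the trade-off weight lambda is again an assumed hyperparameter, not a value taken from the paper.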
